Aug 16, 2025 3:23:25 AM org.apache.karaf.main.Main launch
INFO: Installing and starting initial bundles
Aug 16, 2025 3:23:26 AM org.apache.karaf.main.Main launch
INFO: All initial bundles installed and set to start
Aug 16, 2025 3:23:26 AM org.apache.karaf.main.lock.SimpleFileLock lock
INFO: Trying to lock /tmp/karaf-0.23.0/lock
Aug 16, 2025 3:23:26 AM org.apache.karaf.main.lock.SimpleFileLock lock
INFO: Lock acquired
Aug 16, 2025 3:23:26 AM org.apache.karaf.main.Main$KarafLockCallback lockAcquired
INFO: Lock acquired. Setting startlevel to 100
2025-08-16T03:23:27,107 | INFO | CM Configuration Updater (Update: pid=org.ops4j.pax.logging) | EventAdminConfigurationNotifier | 4 - org.ops4j.pax.logging.pax-logging-log4j2 - 2.2.8 | Logging configuration changed. (Event Admin service unavailable - no notification sent).
2025-08-16T03:23:28,407 | INFO | activator-1-thread-2 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Adding features: 21bfc474-1e7a-4f50-9d2b-0839f0169b3a/[0,0.0.0],odl-openflowplugin-flow-services-rest/[0.20.0,0.20.0],odl-openflowplugin-app-bulk-o-matic/[0.20.0,0.20.0],odl-infrautils-ready/[7.1.4,7.1.4],odl-jolokia/[11.0.0,11.0.0]
2025-08-16T03:23:28,579 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Changes to perform:
2025-08-16T03:23:28,580 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Region: root
2025-08-16T03:23:28,580 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Bundles to install:
2025-08-16T03:23:28,580 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:jakarta.el/jakarta.el-api/3.0.3
2025-08-16T03:23:28,580 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:javax.enterprise/cdi-api/2.0.SP1
2025-08-16T03:23:28,580 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:javax.interceptor/javax.interceptor-api/1.2.2
2025-08-16T03:23:28,580 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:javax.transaction/javax.transaction-api/1.2
2025-08-16T03:23:28,580 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.servicemix.bundles/org.apache.servicemix.bundles.jasypt/1.9.3_1
2025-08-16T03:23:28,580 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.servicemix.bundles/org.apache.servicemix.bundles.javax-inject/1_3
2025-08-16T03:23:28,580 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.jdbc/pax-jdbc/1.5.7
2025-08-16T03:23:28,580 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.jdbc/pax-jdbc-config/1.5.7
2025-08-16T03:23:28,580 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.jdbc/pax-jdbc-pool-common/1.5.7
2025-08-16T03:23:28,581 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.url/pax-url-wrap/2.6.16/jar/uber
2025-08-16T03:23:28,581 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.osgi/org.osgi.service.jdbc/1.1.0
2025-08-16T03:23:28,582 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Installing bundles:
2025-08-16T03:23:28,582 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:jakarta.el/jakarta.el-api/3.0.3
2025-08-16T03:23:28,586 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:javax.enterprise/cdi-api/2.0.SP1
2025-08-16T03:23:28,588 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:javax.interceptor/javax.interceptor-api/1.2.2
2025-08-16T03:23:28,589 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:javax.transaction/javax.transaction-api/1.2
2025-08-16T03:23:28,590 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.servicemix.bundles/org.apache.servicemix.bundles.jasypt/1.9.3_1
2025-08-16T03:23:28,591 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.servicemix.bundles/org.apache.servicemix.bundles.javax-inject/1_3
2025-08-16T03:23:28,592 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.jdbc/pax-jdbc/1.5.7
2025-08-16T03:23:28,593 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.jdbc/pax-jdbc-config/1.5.7
2025-08-16T03:23:28,594 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.jdbc/pax-jdbc-pool-common/1.5.7
2025-08-16T03:23:28,596 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.url/pax-url-wrap/2.6.16/jar/uber
2025-08-16T03:23:28,600 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.osgi/org.osgi.service.jdbc/1.1.0
2025-08-16T03:23:28,633 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Starting bundles:
2025-08-16T03:23:28,634 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.ops4j.pax.url.wrap/2.6.16
2025-08-16T03:23:28,639 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.servicemix.bundles.jasypt/1.9.3.1
2025-08-16T03:23:28,642 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | javax.interceptor-api/1.2.2
2025-08-16T03:23:28,644 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.servicemix.bundles.javax-inject/1.0.0.3
2025-08-16T03:23:28,645 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | javax.el-api/3.0.3
2025-08-16T03:23:28,645 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.osgi.service.jdbc/1.1.0.202212101352
2025-08-16T03:23:28,645 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | javax.enterprise.cdi-api/2.0.0.SP1
2025-08-16T03:23:28,646 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | javax.transaction-api/1.2.0
2025-08-16T03:23:28,646 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.ops4j.pax.jdbc.pool.common/1.5.7
2025-08-16T03:23:28,646 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.ops4j.pax.jdbc/1.5.7
2025-08-16T03:23:28,656 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.ops4j.pax.jdbc.config/1.5.7
2025-08-16T03:23:28,665 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Done.
2025-08-16T03:23:30,975 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Changes to perform:
2025-08-16T03:23:30,975 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Region: root
2025-08-16T03:23:30,975 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Bundles to uninstall:
2025-08-16T03:23:30,975 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.servicemix.bundles.javax-inject/1.0.0.3
2025-08-16T03:23:30,975 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Bundles to install:
2025-08-16T03:23:30,976 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.checkerframework/checker-qual/3.49.3
2025-08-16T03:23:30,976 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:com.google.code.gson/gson/2.13.1
2025-08-16T03:23:30,976 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:com.google.guava/guava/33.4.8-jre
2025-08-16T03:23:30,976 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:com.google.guava/failureaccess/1.0.3
2025-08-16T03:23:30,976 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:com.googlecode.json-simple/json-simple/1.1.1
2025-08-16T03:23:30,976 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:com.h2database/h2/2.3.232
2025-08-16T03:23:30,976 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:com.rabbitmq/amqp-client/5.25.0
2025-08-16T03:23:30,976 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:com.typesafe/config/1.4.3
2025-08-16T03:23:30,976 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:com.typesafe/ssl-config-core_2.13/0.6.1
2025-08-16T03:23:30,976 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.aeron/aeron-client/1.38.1
2025-08-16T03:23:30,976 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.aeron/aeron-driver/1.38.1
2025-08-16T03:23:30,976 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.dropwizard.metrics/metrics-core/4.2.32
2025-08-16T03:23:30,977 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.dropwizard.metrics/metrics-graphite/4.2.32
2025-08-16T03:23:30,977 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.dropwizard.metrics/metrics-healthchecks/4.2.32
2025-08-16T03:23:30,977 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.dropwizard.metrics/metrics-jmx/4.2.32
2025-08-16T03:23:30,977 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.dropwizard.metrics/metrics-jvm/4.2.32
2025-08-16T03:23:30,977 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.netty/netty-buffer/4.2.2.Final
2025-08-16T03:23:30,977 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.netty/netty-codec-base/4.2.2.Final
2025-08-16T03:23:30,977 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.netty/netty-codec-compression/4.2.2.Final
2025-08-16T03:23:30,977 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.netty/netty-codec-http/4.2.2.Final
2025-08-16T03:23:30,977 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.netty/netty-codec-http2/4.2.2.Final
2025-08-16T03:23:30,977 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.netty/netty-common/4.2.2.Final
2025-08-16T03:23:30,977 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.netty/netty-handler/4.2.2.Final
2025-08-16T03:23:30,977 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.netty/netty-resolver/4.2.2.Final
2025-08-16T03:23:30,978 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.netty/netty-transport/4.2.2.Final
2025-08-16T03:23:30,978 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.netty/netty-transport-classes-epoll/4.2.2.Final
2025-08-16T03:23:30,978 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.netty/netty-transport-native-epoll/4.2.2.Final/jar/linux-x86_64
2025-08-16T03:23:30,978 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.netty/netty-transport-native-unix-common/4.2.2.Final
2025-08-16T03:23:30,978 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:jakarta.activation/jakarta.activation-api/1.2.2
2025-08-16T03:23:30,978 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:jakarta.annotation/jakarta.annotation-api/1.3.5
2025-08-16T03:23:30,978 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:jakarta.servlet/jakarta.servlet-api/4.0.4
2025-08-16T03:23:30,978 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:jakarta.validation/jakarta.validation-api/2.0.2
2025-08-16T03:23:30,978 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:jakarta.ws.rs/jakarta.ws.rs-api/2.1.6
2025-08-16T03:23:30,978 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.javassist/javassist/3.30.2-GA
2025-08-16T03:23:30,978 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:javax.servlet/javax.servlet-api/3.1.0
2025-08-16T03:23:30,978 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:jakarta.websocket/jakarta.websocket-api/1.1.2
2025-08-16T03:23:30,978 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.odlparent/karaf.branding/14.1.0
2025-08-16T03:23:30,978 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.lz4/lz4-java/1.8.0
2025-08-16T03:23:30,978 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:net.bytebuddy/byte-buddy/1.17.5
2025-08-16T03:23:30,979 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.agrona/agrona/1.15.2
2025-08-16T03:23:30,979 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.antlr/antlr4-runtime/4.13.2
2025-08-16T03:23:30,979 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.aries.blueprint/org.apache.aries.blueprint.api/1.0.1
2025-08-16T03:23:30,979 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.aries.blueprint/org.apache.aries.blueprint.cm/1.3.2
2025-08-16T03:23:30,979 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.aries.blueprint/org.apache.aries.blueprint.core/1.10.3
2025-08-16T03:23:30,979 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.aries.jmx/org.apache.aries.jmx.api/1.1.5
2025-08-16T03:23:30,979 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.aries.jmx/org.apache.aries.jmx.blueprint.api/1.2.0
2025-08-16T03:23:30,979 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.aries.jmx/org.apache.aries.jmx.blueprint.core/1.2.0
2025-08-16T03:23:30,979 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.aries.jmx/org.apache.aries.jmx.core/1.1.8
2025-08-16T03:23:30,979 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.aries.jmx/org.apache.aries.jmx.whiteboard/1.2.0
2025-08-16T03:23:30,979 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.aries.proxy/org.apache.aries.proxy/1.1.14
2025-08-16T03:23:30,979 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.aries.quiesce/org.apache.aries.quiesce.api/1.0.0
2025-08-16T03:23:30,979 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.aries/org.apache.aries.util/1.1.3
2025-08-16T03:23:30,980 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:commons-collections/commons-collections/3.2.2
2025-08-16T03:23:30,980 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:commons-beanutils/commons-beanutils/1.11.0
2025-08-16T03:23:30,980 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:commons-codec/commons-codec/1.15
2025-08-16T03:23:30,980 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.commons/commons-lang3/3.17.0
2025-08-16T03:23:30,980 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.commons/commons-text/1.13.0
2025-08-16T03:23:30,980 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.felix/org.apache.felix.scr/2.2.6
2025-08-16T03:23:30,980 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.geronimo.specs/geronimo-atinject_1.0_spec/1.2
2025-08-16T03:23:30,980 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.bundle/org.apache.karaf.bundle.blueprintstate/4.4.7
2025-08-16T03:23:30,980 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.bundle/org.apache.karaf.bundle.core/4.4.7
2025-08-16T03:23:30,980 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.config/org.apache.karaf.config.command/4.4.7
2025-08-16T03:23:30,980 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.deployer/org.apache.karaf.deployer.blueprint/4.4.7
2025-08-16T03:23:30,980 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.deployer/org.apache.karaf.deployer.features/4.4.7
2025-08-16T03:23:30,980 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.deployer/org.apache.karaf.deployer.kar/4.4.7
2025-08-16T03:23:30,981 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.deployer/org.apache.karaf.deployer.wrap/4.4.7
2025-08-16T03:23:30,981 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.diagnostic/org.apache.karaf.diagnostic.boot/4.4.7
2025-08-16T03:23:30,981 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.diagnostic/org.apache.karaf.diagnostic.core/4.4.7
2025-08-16T03:23:30,981 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.features/org.apache.karaf.features.command/4.4.7
2025-08-16T03:23:30,981 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.http/org.apache.karaf.http.core/4.4.7
2025-08-16T03:23:30,981 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.instance/org.apache.karaf.instance.core/4.4.7
2025-08-16T03:23:30,981 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.jaas/org.apache.karaf.jaas.command/4.4.7
2025-08-16T03:23:30,981 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.jaas/org.apache.karaf.jaas.config/4.4.7
2025-08-16T03:23:30,981 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.jaas/org.apache.karaf.jaas.modules/4.4.7
2025-08-16T03:23:30,981 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.jdbc/org.apache.karaf.jdbc.core/4.4.7
2025-08-16T03:23:30,981 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.kar/org.apache.karaf.kar.core/4.4.7
2025-08-16T03:23:30,981 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.log/org.apache.karaf.log.core/4.4.7
2025-08-16T03:23:30,981 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.management/org.apache.karaf.management.server/4.4.7
2025-08-16T03:23:30,981 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.package/org.apache.karaf.package.core/4.4.7
2025-08-16T03:23:30,981 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.scr/org.apache.karaf.scr.management/4.4.7
2025-08-16T03:23:30,982 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.scr/org.apache.karaf.scr.state/4.4.7
2025-08-16T03:23:30,982 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.service/org.apache.karaf.service.core/4.4.7
2025-08-16T03:23:30,982 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.shell/org.apache.karaf.shell.commands/4.4.7
2025-08-16T03:23:30,982 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.shell/org.apache.karaf.shell.console/4.4.7
2025-08-16T03:23:30,982 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.shell/org.apache.karaf.shell.core/4.4.7
2025-08-16T03:23:30,982 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.shell/org.apache.karaf.shell.ssh/4.4.7
2025-08-16T03:23:30,982 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.shell/org.apache.karaf.shell.table/4.4.7
2025-08-16T03:23:30,982 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.system/org.apache.karaf.system.core/4.4.7
2025-08-16T03:23:30,982 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.web/org.apache.karaf.web.core/4.4.7
2025-08-16T03:23:30,982 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.sshd/sshd-osgi/2.14.0
2025-08-16T03:23:30,982 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.sshd/sshd-scp/2.14.0
2025-08-16T03:23:30,983 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.sshd/sshd-sftp/2.14.0
2025-08-16T03:23:30,983 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jdt/ecj/3.26.0
2025-08-16T03:23:30,983 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jetty/jetty-client/9.4.57.v20241219
2025-08-16T03:23:30,983 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jetty/jetty-continuation/9.4.57.v20241219
2025-08-16T03:23:30,983 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jetty/jetty-http/9.4.57.v20241219
2025-08-16T03:23:30,983 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jetty/jetty-io/9.4.57.v20241219
2025-08-16T03:23:30,983 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jetty/jetty-jaas/9.4.57.v20241219
2025-08-16T03:23:30,983 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jetty/jetty-jmx/9.4.57.v20241219
2025-08-16T03:23:30,983 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jetty/jetty-security/9.4.57.v20241219
2025-08-16T03:23:30,983 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jetty/jetty-server/9.4.57.v20241219
2025-08-16T03:23:30,983 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jetty/jetty-servlet/9.4.57.v20241219
2025-08-16T03:23:30,983 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jetty/jetty-servlets/9.4.57.v20241219
2025-08-16T03:23:30,983 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jetty/jetty-util/9.4.57.v20241219
2025-08-16T03:23:30,983 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jetty/jetty-util-ajax/9.4.57.v20241219
2025-08-16T03:23:30,984 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jetty/jetty-xml/9.4.57.v20241219
2025-08-16T03:23:30,984 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.glassfish.hk2/hk2-api/2.6.1
2025-08-16T03:23:30,984 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.glassfish.hk2.external/aopalliance-repackaged/2.6.1
2025-08-16T03:23:30,984 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.glassfish.hk2/hk2-locator/2.6.1
2025-08-16T03:23:30,984 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.glassfish.hk2/osgi-resource-locator/1.0.3
2025-08-16T03:23:30,984 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.glassfish.hk2/hk2-utils/2.6.1
2025-08-16T03:23:30,984 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.glassfish.jersey.containers/jersey-container-servlet/2.47
2025-08-16T03:23:30,984 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.glassfish.jersey.containers/jersey-container-servlet-core/2.47
2025-08-16T03:23:30,984 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.glassfish.jersey.core/jersey-client/2.47
2025-08-16T03:23:30,984 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.glassfish.jersey.core/jersey-common/2.47
2025-08-16T03:23:30,984 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.glassfish.jersey.core/jersey-server/2.47
2025-08-16T03:23:30,984 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.glassfish.jersey.inject/jersey-hk2/2.47
2025-08-16T03:23:30,984 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.glassfish.jersey.media/jersey-media-sse/2.47
2025-08-16T03:23:30,984 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.jline/jline/3.21.0
2025-08-16T03:23:30,984 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.jolokia/jolokia-osgi/1.7.2
2025-08-16T03:23:30,985 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.jspecify/jspecify/1.0.0
2025-08-16T03:23:30,985 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ow2.asm/asm/9.7.1
2025-08-16T03:23:30,985 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ow2.asm/asm-commons/9.7.1
2025-08-16T03:23:30,985 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ow2.asm/asm-tree/9.7.1
2025-08-16T03:23:30,985 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ow2.asm/asm-analysis/9.7.1
2025-08-16T03:23:30,985 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ow2.asm/asm-util/9.7.1
2025-08-16T03:23:30,985 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa/aaa-authn-api/0.21.0
2025-08-16T03:23:30,985 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa/aaa-cert/0.21.0
2025-08-16T03:23:30,985 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa/aaa-encrypt-service/0.21.0
2025-08-16T03:23:30,985 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa/aaa-encrypt-service-impl/0.21.0
2025-08-16T03:23:30,985 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa/aaa-filterchain/0.21.0
2025-08-16T03:23:30,985 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa/aaa-idm-store-h2/0.21.0
2025-08-16T03:23:30,985 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa/aaa-jetty-auth-log-filter/0.21.0
2025-08-16T03:23:30,985 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa/aaa-password-service-api/0.21.0
2025-08-16T03:23:30,985 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa/aaa-password-service-impl/0.21.0
2025-08-16T03:23:30,986 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa/repackaged-shiro/0.21.0
2025-08-16T03:23:30,986 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa/aaa-shiro/0.21.0
2025-08-16T03:23:30,986 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa/aaa-shiro-api/0.21.0
2025-08-16T03:23:30,986 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa/aaa-tokenauthrealm/0.21.0
2025-08-16T03:23:30,986 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa.web/web-api/0.21.0
2025-08-16T03:23:30,986 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa.web/web-osgi-impl/0.21.0
2025-08-16T03:23:30,986 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa.web/servlet-api/0.21.0
2025-08-16T03:23:30,986 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa.web/servlet-jersey2/0.21.0
2025-08-16T03:23:30,986 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/atomix-storage/11.0.0
2025-08-16T03:23:30,986 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/blueprint/11.0.0
2025-08-16T03:23:30,986 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/cds-access-api/11.0.0
2025-08-16T03:23:30,986 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/cds-access-client/11.0.0
2025-08-16T03:23:30,986 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/cds-dom-api/11.0.0
2025-08-16T03:23:30,986 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/cds-mgmt-api/11.0.0
2025-08-16T03:23:30,986 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/eos-dom-akka/11.0.0
2025-08-16T03:23:30,986 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/raft-api/11.0.0
2025-08-16T03:23:30,987 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/raft-journal/11.0.0
2025-08-16T03:23:30,987 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/raft-spi/11.0.0
2025-08-16T03:23:30,987 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/repackaged-pekko/11.0.0
2025-08-16T03:23:30,987 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/sal-akka-raft/11.0.0
2025-08-16T03:23:30,987 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/sal-akka-segmented-journal/11.0.0
2025-08-16T03:23:30,987 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/sal-cluster-admin-api/11.0.0
2025-08-16T03:23:30,987 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/sal-cluster-admin-impl/11.0.0
2025-08-16T03:23:30,987 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/sal-clustering-commons/11.0.0
2025-08-16T03:23:30,987 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/sal-common-util/11.0.0
2025-08-16T03:23:30,987 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/sal-distributed-datastore/11.0.0
2025-08-16T03:23:30,987 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/sal-remoterpc-connector/11.0.0
2025-08-16T03:23:30,987 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.infrautils/diagstatus-api/7.1.4
2025-08-16T03:23:30,987 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.infrautils/diagstatus-impl/7.1.4
2025-08-16T03:23:30,987 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.infrautils/diagstatus-shell/7.1.4
2025-08-16T03:23:30,987 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.infrautils/ready-api/7.1.4
2025-08-16T03:23:30,987 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.infrautils/ready-impl/7.1.4
2025-08-16T03:23:30,988 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.infrautils/infrautils-util/7.1.4
2025-08-16T03:23:30,988 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-binding-dom-adapter/14.0.13
2025-08-16T03:23:30,988 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-binding-util/14.0.13
2025-08-16T03:23:30,988 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.iana/iana-crypt-hash/14.0.13
2025-08-16T03:23:30,988 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.iana/iana-ssh-encryption-algs/14.0.13
2025-08-16T03:23:30,988 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.iana/iana-ssh-key-exchange-algs/14.0.13
2025-08-16T03:23:30,988 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.iana/iana-ssh-mac-algs/14.0.13
2025-08-16T03:23:30,988 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.iana/iana-ssh-public-key-algs/14.0.13
2025-08-16T03:23:30,988 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.iana/iana-tls-cipher-suite-algs/14.0.13
2025-08-16T03:23:30,988 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc6241/14.0.13
2025-08-16T03:23:30,988 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc6243/14.0.13
2025-08-16T03:23:30,988 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc6470/14.0.13
2025-08-16T03:23:30,988 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc6991-ietf-inet-types/14.0.13
2025-08-16T03:23:30,988 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc6991-ietf-yang-types/14.0.13
2025-08-16T03:23:30,988 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc7407-ietf-x509-cert-to-name/14.0.13
2025-08-16T03:23:30,988 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc7952/14.0.13
2025-08-16T03:23:30,988 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8040-ietf-restconf/14.0.13
2025-08-16T03:23:30,989 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8040-ietf-restconf-monitoring/14.0.13
2025-08-16T03:23:30,989 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8072/14.0.13
2025-08-16T03:23:30,989 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8341/14.0.13
2025-08-16T03:23:30,989 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8342-ietf-datastores/14.0.13
2025-08-16T03:23:30,989 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8342-ietf-origin/14.0.13
2025-08-16T03:23:30,989 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8343/14.0.13
2025-08-16T03:23:30,989 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8344/14.0.13
2025-08-16T03:23:30,989 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8525/14.0.13
2025-08-16T03:23:30,989 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8526/14.0.13
2025-08-16T03:23:30,989 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8528/14.0.13
2025-08-16T03:23:30,989 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8529/14.0.13
2025-08-16T03:23:30,989 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8639/14.0.13
2025-08-16T03:23:30,989 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8650/14.0.13
2025-08-16T03:23:30,989 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc9640/14.0.13
2025-08-16T03:23:30,989 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc9641/14.0.13
2025-08-16T03:23:30,989 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc9642/14.0.13
2025-08-16T03:23:30,990 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc9643-ietf-tcp-client/14.0.13
2025-08-16T03:23:30,990 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc9643-ietf-tcp-common/14.0.13
2025-08-16T03:23:30,990 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc9643-ietf-tcp-server/14.0.13
2025-08-16T03:23:30,990 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc9644-ietf-ssh-client/14.0.13
2025-08-16T03:23:30,990 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc9644-ietf-ssh-common/14.0.13
2025-08-16T03:23:30,990 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc9644-ietf-ssh-server/14.0.13
2025-08-16T03:23:30,990 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc9645-ietf-tls-client/14.0.13
2025-08-16T03:23:30,990 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc9645-ietf-tls-common/14.0.13
2025-08-16T03:23:30,990 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc9645-ietf-tls-server/14.0.13
2025-08-16T03:23:30,990 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-eos-binding-adapter/14.0.13
2025-08-16T03:23:30,990 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-binding-api/14.0.13
2025-08-16T03:23:30,990 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-binding-spi/14.0.13
2025-08-16T03:23:30,990 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-common-api/14.0.13
2025-08-16T03:23:30,990 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-dom-api/14.0.13
2025-08-16T03:23:30,990 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-dom-broker/14.0.13
2025-08-16T03:23:30,990 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-dom-schema-osgi/14.0.13
2025-08-16T03:23:30,991 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-dom-spi/14.0.13
2025-08-16T03:23:30,991 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-eos-binding-api/14.0.13
2025-08-16T03:23:30,991 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-eos-common-api/14.0.13
2025-08-16T03:23:30,991 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-eos-dom-api/14.0.13
2025-08-16T03:23:30,991 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-singleton-api/14.0.13
2025-08-16T03:23:30,991 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-singleton-impl/14.0.13
2025-08-16T03:23:30,991 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.model/general-entity/14.0.13
2025-08-16T03:23:30,991 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.model/ietf-topology/2013.10.21.26.13
2025-08-16T03:23:30,991 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.model/ietf-type-util/14.0.13
2025-08-16T03:23:30,991 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.model/opendaylight-l2-types/2013.08.27.26.13
2025-08-16T03:23:30,991 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.model/yang-ext/2013.09.07.26.13
2025-08-16T03:23:30,991 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/databind/9.0.0
2025-08-16T03:23:30,991 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/netconf-dom-api/9.0.0
2025-08-16T03:23:30,991 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/keystore-api/9.0.0
2025-08-16T03:23:30,991 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/keystore-none/9.0.0
2025-08-16T03:23:30,991 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf.model/draft-ietf-restconf-server/9.0.0
2025-08-16T03:23:30,992 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf.model/rfc5277/9.0.0
2025-08-16T03:23:30,992 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf.model/sal-remote/9.0.0
2025-08-16T03:23:30,992 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/netconf-api/9.0.0
2025-08-16T03:23:30,992 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/netconf-common-mdsal/9.0.0
2025-08-16T03:23:30,992 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/odl-device-notification/9.0.0
2025-08-16T03:23:30,992 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/restconf-api/9.0.0
2025-08-16T03:23:30,992 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/restconf-mdsal-spi/9.0.0
2025-08-16T03:23:30,992 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/restconf-nb/9.0.0
2025-08-16T03:23:30,992 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/restconf-server/9.0.0
2025-08-16T03:23:30,992 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/restconf-server-api/9.0.0
2025-08-16T03:23:30,992 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/restconf-server-jaxrs/9.0.0
2025-08-16T03:23:30,992 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/restconf-server-mdsal/9.0.0
2025-08-16T03:23:30,992 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/restconf-server-spi/9.0.0
2025-08-16T03:23:30,992 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/restconf-subscription/9.0.0
2025-08-16T03:23:30,992 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/sal-remote-impl/9.0.0
2025-08-16T03:23:30,992 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/shaded-sshd/9.0.0
2025-08-16T03:23:30,992 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/transport-api/9.0.0
2025-08-16T03:23:30,993 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/transport-http/9.0.0
2025-08-16T03:23:30,993 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/transport-ssh/9.0.0
2025-08-16T03:23:30,993 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/transport-tcp/9.0.0
2025-08-16T03:23:30,993 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/transport-tls/9.0.0
2025-08-16T03:23:30,993 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/truststore-api/9.0.0
2025-08-16T03:23:30,993 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/truststore-none/9.0.0
2025-08-16T03:23:30,993 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/yanglib-mdsal-writer/9.0.0
2025-08-16T03:23:30,993 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.odlparent/bundles-diag/14.1.0
2025-08-16T03:23:30,993 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin/openflowplugin/0.20.0
2025-08-16T03:23:30,993 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin/openflowplugin-api/0.20.0
2025-08-16T03:23:30,993 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.applications/arbitratorreconciliation-api/0.20.0
2025-08-16T03:23:30,993 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.applications/arbitratorreconciliation-impl/0.20.0
2025-08-16T03:23:30,993 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.applications/bulk-o-matic/0.20.0
2025-08-16T03:23:30,993 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.applications/device-ownership-service/0.20.0
2025-08-16T03:23:30,993 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.applications/forwardingrules-manager/0.20.0
2025-08-16T03:23:30,993 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.applications/lldp-speaker/0.20.0
2025-08-16T03:23:30,994 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.applications/of-switch-config-pusher/0.20.0
2025-08-16T03:23:30,994 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.applications/reconciliation-framework/0.20.0
2025-08-16T03:23:30,994 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.applications/topology-lldp-discovery/0.20.0
2025-08-16T03:23:30,994 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.applications/topology-manager/0.20.0
2025-08-16T03:23:30,994 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin/openflowplugin-blueprint-config/0.20.0
2025-08-16T03:23:30,994 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin/openflowplugin-common/0.20.0
2025-08-16T03:23:30,994 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin/openflowplugin-extension-api/0.20.0
2025-08-16T03:23:30,994 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin/openflowplugin-extension-onf/0.20.0
2025-08-16T03:23:30,994 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin/openflowplugin-impl/0.20.0
2025-08-16T03:23:30,994 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.libraries/liblldp/0.20.0
2025-08-16T03:23:30,994 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.model/model-flow-base/0.20.0
2025-08-16T03:23:30,994 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.model/model-flow-service/0.20.0
2025-08-16T03:23:30,994 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.model/model-flow-statistics/0.20.0
2025-08-16T03:23:30,994 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.model/model-inventory/0.20.0
2025-08-16T03:23:30,994 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.model/model-topology/0.20.0
2025-08-16T03:23:30,994 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.openflowjava/openflowjava-blueprint-config/0.20.0
2025-08-16T03:23:30,994 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.openflowjava/openflow-protocol-api/0.20.0
2025-08-16T03:23:30,994 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.openflowjava/openflow-protocol-impl/0.20.0
2025-08-16T03:23:30,995 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.openflowjava/openflow-protocol-spi/0.20.0
2025-08-16T03:23:30,995 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.openflowjava/openflowjava-util/0.20.0
2025-08-16T03:23:30,995 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin/srm-api/0.20.0
2025-08-16T03:23:30,995 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin/srm-impl/0.20.0
2025-08-16T03:23:30,995 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin/srm-shell/0.20.0
2025-08-16T03:23:30,995 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/binding-data-codec-api/14.0.14
2025-08-16T03:23:30,995 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/binding-data-codec-dynamic/14.0.14
2025-08-16T03:23:30,995 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/binding-data-codec-osgi/14.0.14
2025-08-16T03:23:30,995 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/binding-data-codec-spi/14.0.14
2025-08-16T03:23:30,995 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/binding-generator/14.0.14
2025-08-16T03:23:30,995 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/binding-loader/14.0.14
2025-08-16T03:23:30,995 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/binding-model/14.0.14
2025-08-16T03:23:30,995 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/binding-reflect/14.0.14
2025-08-16T03:23:30,995 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/binding-runtime-api/14.0.14
2025-08-16T03:23:30,995 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/binding-runtime-osgi/14.0.14
2025-08-16T03:23:30,995 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/binding-runtime-spi/14.0.14
2025-08-16T03:23:30,995 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/binding-spec/14.0.14
2025-08-16T03:23:30,995 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/codegen-extensions/14.0.14
2025-08-16T03:23:30,996 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/concepts/14.0.14
2025-08-16T03:23:30,996 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/odlext-model-api/14.0.14
2025-08-16T03:23:30,996 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/odlext-parser-support/14.0.14
2025-08-16T03:23:30,996 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/openconfig-model-api/14.0.14
2025-08-16T03:23:30,996 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/openconfig-parser-support/14.0.14
2025-08-16T03:23:30,996 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc6241-model-api/14.0.14
2025-08-16T03:23:30,996 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc6241-parser-support/14.0.14
2025-08-16T03:23:30,996 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc6536-model-api/14.0.14
2025-08-16T03:23:30,996 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc6536-parser-support/14.0.14
2025-08-16T03:23:30,996 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc6643-model-api/14.0.14
2025-08-16T03:23:30,996 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc6643-parser-support/14.0.14
2025-08-16T03:23:30,996 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc7952-model-api/14.0.14
2025-08-16T03:23:30,996 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc7952-parser-support/14.0.14
2025-08-16T03:23:30,996 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc8040-model-api/14.0.14
2025-08-16T03:23:30,996 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc8040-parser-support/14.0.14
2025-08-16T03:23:30,996 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc8528-model-api/14.0.14
2025-08-16T03:23:30,996 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc8528-parser-support/14.0.14
2025-08-16T03:23:30,996 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc8639-model-api/14.0.14
2025-08-16T03:23:30,996 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc8639-parser-support/14.0.14
2025-08-16T03:23:30,997 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc8819-model-api/14.0.14
2025-08-16T03:23:30,997 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc8819-parser-support/14.0.14
2025-08-16T03:23:30,997 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/util/14.0.14
2025-08-16T03:23:30,997 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-common/14.0.14
2025-08-16T03:23:30,997 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-common-netty/14.0.14
2025-08-16T03:23:30,997 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-data-api/14.0.14
2025-08-16T03:23:30,997 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-data-codec-binfmt/14.0.14
2025-08-16T03:23:30,997 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-data-codec-gson/14.0.14
2025-08-16T03:23:30,997 | INFO | features-3-thread-1 |
FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-data-codec-xml/14.0.14 2025-08-16T03:23:30,997 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-data-impl/14.0.14 2025-08-16T03:23:30,997 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-data-spi/14.0.14 2025-08-16T03:23:30,997 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-data-transform/14.0.14 2025-08-16T03:23:30,997 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-data-tree-api/14.0.14 2025-08-16T03:23:30,997 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-data-tree-ri/14.0.14 2025-08-16T03:23:30,998 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-data-tree-spi/14.0.14 2025-08-16T03:23:30,998 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-data-util/14.0.14 2025-08-16T03:23:30,998 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-ir/14.0.14 2025-08-16T03:23:30,998 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-model-api/14.0.14 2025-08-16T03:23:30,998 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-model-export/14.0.14 2025-08-16T03:23:30,998 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-model-ri/14.0.14 2025-08-16T03:23:30,998 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-model-spi/14.0.14 2025-08-16T03:23:30,998 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-model-util/14.0.14 2025-08-16T03:23:30,998 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-parser-api/14.0.14 2025-08-16T03:23:30,998 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-parser-impl/14.0.14 2025-08-16T03:23:30,998 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-parser-reactor/14.0.14 2025-08-16T03:23:30,998 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-parser-rfc7950/14.0.14 2025-08-16T03:23:30,998 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-parser-spi/14.0.14 2025-08-16T03:23:30,998 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-repo-api/14.0.14 2025-08-16T03:23:30,998 | INFO | 
features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-repo-fs/14.0.14 2025-08-16T03:23:30,998 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-repo-spi/14.0.14 2025-08-16T03:23:30,999 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-xpath-api/14.0.14 2025-08-16T03:23:30,999 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-xpath-impl/14.0.14 2025-08-16T03:23:30,999 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.url/pax-url-war/2.6.16/jar/uber 2025-08-16T03:23:30,999 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.web/pax-web-api/8.0.30 2025-08-16T03:23:30,999 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.web/pax-web-compatibility-el2/8.0.30 2025-08-16T03:23:30,999 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.web/pax-web-compatibility-servlet31/8.0.30 2025-08-16T03:23:30,999 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.web/pax-web-extender-war/8.0.30 2025-08-16T03:23:30,999 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.web/pax-web-extender-whiteboard/8.0.30 2025-08-16T03:23:30,999 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.web/pax-web-jetty/8.0.30 2025-08-16T03:23:30,999 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.web/pax-web-jsp/8.0.30 2025-08-16T03:23:30,999 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.web/pax-web-runtime/8.0.30 2025-08-16T03:23:30,999 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.web/pax-web-spi/8.0.30 2025-08-16T03:23:30,999 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.web/pax-web-tomcat-common/8.0.30 2025-08-16T03:23:30,999 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.web/pax-web-websocket/8.0.30 2025-08-16T03:23:30,999 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.osgi/org.osgi.service.component/1.5.1 2025-08-16T03:23:30,999 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.owasp.encoder/encoder/1.3.1 2025-08-16T03:23:30,999 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.scala-lang.modules/scala-parser-combinators_2.13/1.1.2 2025-08-16T03:23:30,999 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.scala-lang/scala-library/2.13.16 2025-08-16T03:23:30,999 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | 
mvn:org.scala-lang/scala-reflect/2.13.16 2025-08-16T03:23:31,000 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.reactivestreams/reactive-streams/1.0.4 2025-08-16T03:23:31,000 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.codehaus.woodstox/stax2-api/4.2.2 2025-08-16T03:23:31,000 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:tech.pantheon.triemap/triemap/1.3.2 2025-08-16T03:23:31,000 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | wrap:mvn:net.java.dev.stax-utils/stax-utils/20070216 2025-08-16T03:23:31,000 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | wrap:mvn:org.lmdbjava/lmdbjava/0.7.0 2025-08-16T03:23:31,000 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Stopping bundles: 2025-08-16T03:23:31,001 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.ops4j.pax.jdbc.pool.common/1.5.7 2025-08-16T03:23:31,002 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.servicemix.bundles.javax-inject/1.0.0.3 2025-08-16T03:23:31,002 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.servicemix.bundles.jasypt/1.9.3.1 2025-08-16T03:23:31,002 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | javax.transaction-api/1.2.0 2025-08-16T03:23:31,003 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | javax.enterprise.cdi-api/2.0.0.SP1 2025-08-16T03:23:31,003 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | javax.el-api/3.0.3 2025-08-16T03:23:31,003 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.ops4j.pax.jdbc.config/1.5.7 2025-08-16T03:23:31,004 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Uninstalling bundles: 2025-08-16T03:23:31,004 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.servicemix.bundles.javax-inject/1.0.0.3 2025-08-16T03:23:31,006 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Installing bundles: 2025-08-16T03:23:31,006 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.checkerframework/checker-qual/3.49.3 2025-08-16T03:23:31,008 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:com.google.code.gson/gson/2.13.1 2025-08-16T03:23:31,009 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:com.google.guava/guava/33.4.8-jre 2025-08-16T03:23:31,014 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:com.google.guava/failureaccess/1.0.3 2025-08-16T03:23:31,015 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:com.googlecode.json-simple/json-simple/1.1.1 2025-08-16T03:23:31,015 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | 
mvn:com.h2database/h2/2.3.232 2025-08-16T03:23:31,020 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:com.rabbitmq/amqp-client/5.25.0 2025-08-16T03:23:31,022 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:com.typesafe/config/1.4.3 2025-08-16T03:23:31,023 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:com.typesafe/ssl-config-core_2.13/0.6.1 2025-08-16T03:23:31,024 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.aeron/aeron-client/1.38.1 2025-08-16T03:23:31,025 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.aeron/aeron-driver/1.38.1 2025-08-16T03:23:31,027 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.dropwizard.metrics/metrics-core/4.2.32 2025-08-16T03:23:31,028 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.dropwizard.metrics/metrics-graphite/4.2.32 2025-08-16T03:23:31,029 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.dropwizard.metrics/metrics-healthchecks/4.2.32 2025-08-16T03:23:31,030 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.dropwizard.metrics/metrics-jmx/4.2.32 2025-08-16T03:23:31,031 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.dropwizard.metrics/metrics-jvm/4.2.32 2025-08-16T03:23:31,951 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.netty/netty-buffer/4.2.2.Final 2025-08-16T03:23:31,955 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.netty/netty-codec-base/4.2.2.Final 2025-08-16T03:23:31,957 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.netty/netty-codec-compression/4.2.2.Final 2025-08-16T03:23:31,959 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.netty/netty-codec-http/4.2.2.Final 2025-08-16T03:23:31,963 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.netty/netty-codec-http2/4.2.2.Final 2025-08-16T03:23:31,964 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.netty/netty-common/4.2.2.Final 2025-08-16T03:23:31,966 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.netty/netty-handler/4.2.2.Final 2025-08-16T03:23:31,968 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.netty/netty-resolver/4.2.2.Final 2025-08-16T03:23:31,969 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.netty/netty-transport/4.2.2.Final 2025-08-16T03:23:31,970 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.netty/netty-transport-classes-epoll/4.2.2.Final 2025-08-16T03:23:31,971 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.netty/netty-transport-native-epoll/4.2.2.Final/jar/linux-x86_64 2025-08-16T03:23:31,973 | 
INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:io.netty/netty-transport-native-unix-common/4.2.2.Final 2025-08-16T03:23:31,974 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:jakarta.activation/jakarta.activation-api/1.2.2 2025-08-16T03:23:31,975 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:jakarta.annotation/jakarta.annotation-api/1.3.5 2025-08-16T03:23:31,976 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:jakarta.servlet/jakarta.servlet-api/4.0.4 2025-08-16T03:23:31,976 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:jakarta.validation/jakarta.validation-api/2.0.2 2025-08-16T03:23:31,977 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:jakarta.ws.rs/jakarta.ws.rs-api/2.1.6 2025-08-16T03:23:31,978 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.javassist/javassist/3.30.2-GA 2025-08-16T03:23:31,980 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:javax.servlet/javax.servlet-api/3.1.0 2025-08-16T03:23:31,981 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:jakarta.websocket/jakarta.websocket-api/1.1.2 2025-08-16T03:23:31,982 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.odlparent/karaf.branding/14.1.0 2025-08-16T03:23:31,982 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.lz4/lz4-java/1.8.0 2025-08-16T03:23:31,984 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:net.bytebuddy/byte-buddy/1.17.5 2025-08-16T03:23:31,997 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.agrona/agrona/1.15.2 2025-08-16T03:23:31,998 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.antlr/antlr4-runtime/4.13.2 2025-08-16T03:23:31,999 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.aries.blueprint/org.apache.aries.blueprint.api/1.0.1 2025-08-16T03:23:32,000 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.aries.blueprint/org.apache.aries.blueprint.cm/1.3.2 2025-08-16T03:23:32,001 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.aries.blueprint/org.apache.aries.blueprint.core/1.10.3 2025-08-16T03:23:32,003 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.aries.jmx/org.apache.aries.jmx.api/1.1.5 2025-08-16T03:23:32,004 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.aries.jmx/org.apache.aries.jmx.blueprint.api/1.2.0 2025-08-16T03:23:32,005 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.aries.jmx/org.apache.aries.jmx.blueprint.core/1.2.0 2025-08-16T03:23:32,005 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - 
org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.aries.jmx/org.apache.aries.jmx.core/1.1.8 2025-08-16T03:23:32,006 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.aries.jmx/org.apache.aries.jmx.whiteboard/1.2.0 2025-08-16T03:23:32,007 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.aries.proxy/org.apache.aries.proxy/1.1.14 2025-08-16T03:23:32,008 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.aries.quiesce/org.apache.aries.quiesce.api/1.0.0 2025-08-16T03:23:32,009 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.aries/org.apache.aries.util/1.1.3 2025-08-16T03:23:32,029 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:commons-collections/commons-collections/3.2.2 2025-08-16T03:23:32,030 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:commons-beanutils/commons-beanutils/1.11.0 2025-08-16T03:23:32,032 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:commons-codec/commons-codec/1.15 2025-08-16T03:23:32,034 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.commons/commons-lang3/3.17.0 2025-08-16T03:23:32,035 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.commons/commons-text/1.13.0 2025-08-16T03:23:32,036 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.felix/org.apache.felix.scr/2.2.6 2025-08-16T03:23:32,038 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.geronimo.specs/geronimo-atinject_1.0_spec/1.2 2025-08-16T03:23:32,038 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.bundle/org.apache.karaf.bundle.blueprintstate/4.4.7 2025-08-16T03:23:32,039 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.bundle/org.apache.karaf.bundle.core/4.4.7 2025-08-16T03:23:32,040 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.config/org.apache.karaf.config.command/4.4.7 2025-08-16T03:23:32,042 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.deployer/org.apache.karaf.deployer.blueprint/4.4.7 2025-08-16T03:23:32,043 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.deployer/org.apache.karaf.deployer.features/4.4.7 2025-08-16T03:23:32,043 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.deployer/org.apache.karaf.deployer.kar/4.4.7 2025-08-16T03:23:32,048 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.deployer/org.apache.karaf.deployer.wrap/4.4.7 2025-08-16T03:23:32,049 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.diagnostic/org.apache.karaf.diagnostic.boot/4.4.7 
2025-08-16T03:23:32,050 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.diagnostic/org.apache.karaf.diagnostic.core/4.4.7 2025-08-16T03:23:32,050 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.features/org.apache.karaf.features.command/4.4.7 2025-08-16T03:23:32,052 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.http/org.apache.karaf.http.core/4.4.7 2025-08-16T03:23:32,054 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.instance/org.apache.karaf.instance.core/4.4.7 2025-08-16T03:23:32,055 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.jaas/org.apache.karaf.jaas.command/4.4.7 2025-08-16T03:23:32,056 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.jaas/org.apache.karaf.jaas.config/4.4.7 2025-08-16T03:23:32,057 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.jaas/org.apache.karaf.jaas.modules/4.4.7 2025-08-16T03:23:32,059 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.jdbc/org.apache.karaf.jdbc.core/4.4.7 2025-08-16T03:23:32,060 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.kar/org.apache.karaf.kar.core/4.4.7 2025-08-16T03:23:32,061 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.log/org.apache.karaf.log.core/4.4.7 2025-08-16T03:23:32,062 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.management/org.apache.karaf.management.server/4.4.7 2025-08-16T03:23:32,063 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.package/org.apache.karaf.package.core/4.4.7 2025-08-16T03:23:32,064 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.scr/org.apache.karaf.scr.management/4.4.7 2025-08-16T03:23:32,065 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.scr/org.apache.karaf.scr.state/4.4.7 2025-08-16T03:23:32,066 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.service/org.apache.karaf.service.core/4.4.7 2025-08-16T03:23:32,067 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.shell/org.apache.karaf.shell.commands/4.4.7 2025-08-16T03:23:32,068 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.shell/org.apache.karaf.shell.console/4.4.7 2025-08-16T03:23:32,069 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.shell/org.apache.karaf.shell.core/4.4.7 2025-08-16T03:23:32,071 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.shell/org.apache.karaf.shell.ssh/4.4.7 
2025-08-16T03:23:32,072 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.shell/org.apache.karaf.shell.table/4.4.7 2025-08-16T03:23:32,073 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.system/org.apache.karaf.system.core/4.4.7 2025-08-16T03:23:32,074 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.karaf.web/org.apache.karaf.web.core/4.4.7 2025-08-16T03:23:32,075 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.sshd/sshd-osgi/2.14.0 2025-08-16T03:23:32,080 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.sshd/sshd-scp/2.14.0 2025-08-16T03:23:32,081 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.apache.sshd/sshd-sftp/2.14.0 2025-08-16T03:23:32,088 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jdt/ecj/3.26.0 2025-08-16T03:23:32,093 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jetty/jetty-client/9.4.57.v20241219 2025-08-16T03:23:32,095 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jetty/jetty-continuation/9.4.57.v20241219 2025-08-16T03:23:32,095 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jetty/jetty-http/9.4.57.v20241219 2025-08-16T03:23:32,096 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jetty/jetty-io/9.4.57.v20241219 2025-08-16T03:23:32,097 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jetty/jetty-jaas/9.4.57.v20241219 2025-08-16T03:23:32,098 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jetty/jetty-jmx/9.4.57.v20241219 2025-08-16T03:23:32,099 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jetty/jetty-security/9.4.57.v20241219 2025-08-16T03:23:32,100 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jetty/jetty-server/9.4.57.v20241219 2025-08-16T03:23:32,102 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jetty/jetty-servlet/9.4.57.v20241219 2025-08-16T03:23:32,103 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jetty/jetty-servlets/9.4.57.v20241219 2025-08-16T03:23:32,104 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jetty/jetty-util/9.4.57.v20241219 2025-08-16T03:23:32,105 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jetty/jetty-util-ajax/9.4.57.v20241219 2025-08-16T03:23:32,106 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.eclipse.jetty/jetty-xml/9.4.57.v20241219 2025-08-16T03:23:32,107 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - 
org.apache.karaf.features.core - 4.4.7 | mvn:org.glassfish.hk2/hk2-api/2.6.1 2025-08-16T03:23:32,108 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.glassfish.hk2.external/aopalliance-repackaged/2.6.1 2025-08-16T03:23:32,109 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.glassfish.hk2/hk2-locator/2.6.1 2025-08-16T03:23:32,110 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.glassfish.hk2/osgi-resource-locator/1.0.3 2025-08-16T03:23:32,111 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.glassfish.hk2/hk2-utils/2.6.1 2025-08-16T03:23:32,112 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.glassfish.jersey.containers/jersey-container-servlet/2.47 2025-08-16T03:23:32,112 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.glassfish.jersey.containers/jersey-container-servlet-core/2.47 2025-08-16T03:23:32,113 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.glassfish.jersey.core/jersey-client/2.47 2025-08-16T03:23:32,114 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.glassfish.jersey.core/jersey-common/2.47 2025-08-16T03:23:32,118 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.glassfish.jersey.core/jersey-server/2.47 2025-08-16T03:23:32,120 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.glassfish.jersey.inject/jersey-hk2/2.47 2025-08-16T03:23:32,121 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.glassfish.jersey.media/jersey-media-sse/2.47 2025-08-16T03:23:32,122 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.jline/jline/3.21.0 2025-08-16T03:23:32,124 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.jolokia/jolokia-osgi/1.7.2 2025-08-16T03:23:32,125 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.jspecify/jspecify/1.0.0 2025-08-16T03:23:32,126 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ow2.asm/asm/9.7.1 2025-08-16T03:23:32,127 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ow2.asm/asm-commons/9.7.1 2025-08-16T03:23:32,128 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ow2.asm/asm-tree/9.7.1 2025-08-16T03:23:32,128 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ow2.asm/asm-analysis/9.7.1 2025-08-16T03:23:32,129 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ow2.asm/asm-util/9.7.1 2025-08-16T03:23:32,130 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa/aaa-authn-api/0.21.0 2025-08-16T03:23:32,130 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | 
mvn:org.opendaylight.aaa/aaa-cert/0.21.0 2025-08-16T03:23:32,132 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa/aaa-encrypt-service/0.21.0 2025-08-16T03:23:32,132 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa/aaa-encrypt-service-impl/0.21.0 2025-08-16T03:23:32,133 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa/aaa-filterchain/0.21.0 2025-08-16T03:23:32,134 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa/aaa-idm-store-h2/0.21.0 2025-08-16T03:23:32,135 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa/aaa-jetty-auth-log-filter/0.21.0 2025-08-16T03:23:32,135 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa/aaa-password-service-api/0.21.0 2025-08-16T03:23:32,136 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa/aaa-password-service-impl/0.21.0 2025-08-16T03:23:32,137 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa/repackaged-shiro/0.21.0 2025-08-16T03:23:32,139 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa/aaa-shiro/0.21.0 2025-08-16T03:23:32,141 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa/aaa-shiro-api/0.21.0 2025-08-16T03:23:32,142 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa/aaa-tokenauthrealm/0.21.0 2025-08-16T03:23:32,143 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa.web/web-api/0.21.0 2025-08-16T03:23:32,143 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa.web/web-osgi-impl/0.21.0 2025-08-16T03:23:32,144 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa.web/servlet-api/0.21.0 2025-08-16T03:23:32,145 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.aaa.web/servlet-jersey2/0.21.0 2025-08-16T03:23:32,146 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/atomix-storage/11.0.0 2025-08-16T03:23:32,147 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/blueprint/11.0.0 2025-08-16T03:23:32,148 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/cds-access-api/11.0.0 2025-08-16T03:23:32,149 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/cds-access-client/11.0.0 2025-08-16T03:23:32,150 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/cds-dom-api/11.0.0 2025-08-16T03:23:32,151 | 
INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/cds-mgmt-api/11.0.0 2025-08-16T03:23:32,152 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/eos-dom-akka/11.0.0 2025-08-16T03:23:32,153 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/raft-api/11.0.0 2025-08-16T03:23:32,154 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/raft-journal/11.0.0 2025-08-16T03:23:32,155 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/raft-spi/11.0.0 2025-08-16T03:23:32,155 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/repackaged-pekko/11.0.0 2025-08-16T03:23:32,180 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/sal-akka-raft/11.0.0 2025-08-16T03:23:32,182 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/sal-akka-segmented-journal/11.0.0 2025-08-16T03:23:32,183 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/sal-cluster-admin-api/11.0.0 2025-08-16T03:23:32,184 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/sal-cluster-admin-impl/11.0.0 2025-08-16T03:23:32,185 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/sal-clustering-commons/11.0.0 2025-08-16T03:23:32,186 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/sal-common-util/11.0.0 2025-08-16T03:23:32,187 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/sal-distributed-datastore/11.0.0 2025-08-16T03:23:32,189 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.controller/sal-remoterpc-connector/11.0.0 2025-08-16T03:23:32,190 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.infrautils/diagstatus-api/7.1.4 2025-08-16T03:23:32,191 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.infrautils/diagstatus-impl/7.1.4 2025-08-16T03:23:32,192 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.infrautils/diagstatus-shell/7.1.4 2025-08-16T03:23:32,193 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.infrautils/ready-api/7.1.4 2025-08-16T03:23:32,193 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.infrautils/ready-impl/7.1.4 2025-08-16T03:23:32,194 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.infrautils/infrautils-util/7.1.4 
2025-08-16T03:23:32,195 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-binding-dom-adapter/14.0.13 2025-08-16T03:23:32,196 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-binding-util/14.0.13 2025-08-16T03:23:32,197 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.iana/iana-crypt-hash/14.0.13 2025-08-16T03:23:32,198 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.iana/iana-ssh-encryption-algs/14.0.13 2025-08-16T03:23:32,198 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.iana/iana-ssh-key-exchange-algs/14.0.13 2025-08-16T03:23:32,199 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.iana/iana-ssh-mac-algs/14.0.13 2025-08-16T03:23:32,200 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.iana/iana-ssh-public-key-algs/14.0.13 2025-08-16T03:23:32,200 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.iana/iana-tls-cipher-suite-algs/14.0.13 2025-08-16T03:23:32,201 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc6241/14.0.13 2025-08-16T03:23:32,203 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc6243/14.0.13 2025-08-16T03:23:32,204 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc6470/14.0.13 2025-08-16T03:23:32,205 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc6991-ietf-inet-types/14.0.13 2025-08-16T03:23:32,206 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc6991-ietf-yang-types/14.0.13 2025-08-16T03:23:32,207 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc7407-ietf-x509-cert-to-name/14.0.13 2025-08-16T03:23:32,207 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc7952/14.0.13 2025-08-16T03:23:32,208 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8040-ietf-restconf/14.0.13 2025-08-16T03:23:32,209 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8040-ietf-restconf-monitoring/14.0.13 2025-08-16T03:23:32,210 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8072/14.0.13 2025-08-16T03:23:32,211 | INFO | 
features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8341/14.0.13 2025-08-16T03:23:32,212 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8342-ietf-datastores/14.0.13 2025-08-16T03:23:32,212 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8342-ietf-origin/14.0.13 2025-08-16T03:23:32,213 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8343/14.0.13 2025-08-16T03:23:32,214 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8344/14.0.13 2025-08-16T03:23:32,215 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8525/14.0.13 2025-08-16T03:23:32,216 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8526/14.0.13 2025-08-16T03:23:32,218 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8528/14.0.13 2025-08-16T03:23:32,218 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8529/14.0.13 2025-08-16T03:23:32,219 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8639/14.0.13 2025-08-16T03:23:32,221 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc8650/14.0.13 2025-08-16T03:23:32,222 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc9640/14.0.13 2025-08-16T03:23:32,223 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc9641/14.0.13 2025-08-16T03:23:32,225 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc9642/14.0.13 2025-08-16T03:23:32,226 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc9643-ietf-tcp-client/14.0.13 2025-08-16T03:23:32,227 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc9643-ietf-tcp-common/14.0.13 2025-08-16T03:23:32,228 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc9643-ietf-tcp-server/14.0.13 2025-08-16T03:23:32,229 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc9644-ietf-ssh-client/14.0.13 2025-08-16T03:23:32,230 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | 
mvn:org.opendaylight.mdsal.binding.model.ietf/rfc9644-ietf-ssh-common/14.0.13 2025-08-16T03:23:32,231 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc9644-ietf-ssh-server/14.0.13 2025-08-16T03:23:32,232 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc9645-ietf-tls-client/14.0.13 2025-08-16T03:23:32,234 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc9645-ietf-tls-common/14.0.13 2025-08-16T03:23:32,235 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.binding.model.ietf/rfc9645-ietf-tls-server/14.0.13 2025-08-16T03:23:32,237 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-eos-binding-adapter/14.0.13 2025-08-16T03:23:32,238 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-binding-api/14.0.13 2025-08-16T03:23:32,238 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-binding-spi/14.0.13 2025-08-16T03:23:32,239 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-common-api/14.0.13 2025-08-16T03:23:32,240 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-dom-api/14.0.13 2025-08-16T03:23:32,241 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-dom-broker/14.0.13 2025-08-16T03:23:32,242 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-dom-schema-osgi/14.0.13 2025-08-16T03:23:32,243 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-dom-spi/14.0.13 2025-08-16T03:23:32,243 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-eos-binding-api/14.0.13 2025-08-16T03:23:32,244 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-eos-common-api/14.0.13 2025-08-16T03:23:32,245 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-eos-dom-api/14.0.13 2025-08-16T03:23:32,246 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-singleton-api/14.0.13 2025-08-16T03:23:32,246 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal/mdsal-singleton-impl/14.0.13 2025-08-16T03:23:32,247 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.model/general-entity/14.0.13 2025-08-16T03:23:32,248 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.model/ietf-topology/2013.10.21.26.13 
2025-08-16T03:23:32,249 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.model/ietf-type-util/14.0.13 2025-08-16T03:23:32,250 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.model/opendaylight-l2-types/2013.08.27.26.13 2025-08-16T03:23:32,250 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.mdsal.model/yang-ext/2013.09.07.26.13 2025-08-16T03:23:32,251 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/databind/9.0.0 2025-08-16T03:23:32,252 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/netconf-dom-api/9.0.0 2025-08-16T03:23:32,253 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/keystore-api/9.0.0 2025-08-16T03:23:32,253 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/keystore-none/9.0.0 2025-08-16T03:23:32,254 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf.model/draft-ietf-restconf-server/9.0.0 2025-08-16T03:23:32,256 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf.model/rfc5277/9.0.0 2025-08-16T03:23:32,257 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf.model/sal-remote/9.0.0 2025-08-16T03:23:32,258 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/netconf-api/9.0.0 2025-08-16T03:23:32,259 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/netconf-common-mdsal/9.0.0 2025-08-16T03:23:32,260 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/odl-device-notification/9.0.0 2025-08-16T03:23:32,260 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/restconf-api/9.0.0 2025-08-16T03:23:32,261 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/restconf-mdsal-spi/9.0.0 2025-08-16T03:23:32,262 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/restconf-nb/9.0.0 2025-08-16T03:23:32,263 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/restconf-server/9.0.0 2025-08-16T03:23:32,264 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/restconf-server-api/9.0.0 2025-08-16T03:23:32,265 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/restconf-server-jaxrs/9.0.0 2025-08-16T03:23:32,266 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/restconf-server-mdsal/9.0.0 
2025-08-16T03:23:32,267 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/restconf-server-spi/9.0.0 2025-08-16T03:23:32,268 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/restconf-subscription/9.0.0 2025-08-16T03:23:32,269 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/sal-remote-impl/9.0.0 2025-08-16T03:23:32,270 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/shaded-sshd/9.0.0 2025-08-16T03:23:32,275 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/transport-api/9.0.0 2025-08-16T03:23:32,276 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/transport-http/9.0.0 2025-08-16T03:23:32,279 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/transport-ssh/9.0.0 2025-08-16T03:23:32,280 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/transport-tcp/9.0.0 2025-08-16T03:23:32,281 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/transport-tls/9.0.0 2025-08-16T03:23:32,282 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/truststore-api/9.0.0 2025-08-16T03:23:32,283 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/truststore-none/9.0.0 2025-08-16T03:23:32,283 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.netconf/yanglib-mdsal-writer/9.0.0 2025-08-16T03:23:32,284 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.odlparent/bundles-diag/14.1.0 2025-08-16T03:23:32,285 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin/openflowplugin/0.20.0 2025-08-16T03:23:32,288 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin/openflowplugin-api/0.20.0 2025-08-16T03:23:32,290 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.applications/arbitratorreconciliation-api/0.20.0 2025-08-16T03:23:32,291 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.applications/arbitratorreconciliation-impl/0.20.0 2025-08-16T03:23:32,292 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.applications/bulk-o-matic/0.20.0 2025-08-16T03:23:32,293 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.applications/device-ownership-service/0.20.0 2025-08-16T03:23:32,294 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core 
- 4.4.7 | mvn:org.opendaylight.openflowplugin.applications/forwardingrules-manager/0.20.0 2025-08-16T03:23:32,295 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.applications/lldp-speaker/0.20.0 2025-08-16T03:23:32,296 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.applications/of-switch-config-pusher/0.20.0 2025-08-16T03:23:32,297 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.applications/reconciliation-framework/0.20.0 2025-08-16T03:23:32,298 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.applications/topology-lldp-discovery/0.20.0 2025-08-16T03:23:32,299 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.applications/topology-manager/0.20.0 2025-08-16T03:23:32,300 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin/openflowplugin-blueprint-config/0.20.0 2025-08-16T03:23:32,301 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin/openflowplugin-common/0.20.0 2025-08-16T03:23:32,301 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin/openflowplugin-extension-api/0.20.0 2025-08-16T03:23:32,304 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin/openflowplugin-extension-onf/0.20.0 2025-08-16T03:23:32,306 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin/openflowplugin-impl/0.20.0 2025-08-16T03:23:32,310 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.libraries/liblldp/0.20.0 2025-08-16T03:23:32,311 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.model/model-flow-base/0.20.0 2025-08-16T03:23:32,317 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.model/model-flow-service/0.20.0 2025-08-16T03:23:32,322 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.model/model-flow-statistics/0.20.0 2025-08-16T03:23:32,326 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.model/model-inventory/0.20.0 2025-08-16T03:23:32,326 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.model/model-topology/0.20.0 2025-08-16T03:23:32,327 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.openflowjava/openflowjava-blueprint-config/0.20.0 2025-08-16T03:23:32,328 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | 
mvn:org.opendaylight.openflowplugin.openflowjava/openflow-protocol-api/0.20.0 2025-08-16T03:23:32,337 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.openflowjava/openflow-protocol-impl/0.20.0 2025-08-16T03:23:32,340 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.openflowjava/openflow-protocol-spi/0.20.0 2025-08-16T03:23:32,341 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin.openflowjava/openflowjava-util/0.20.0 2025-08-16T03:23:32,342 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin/srm-api/0.20.0 2025-08-16T03:23:32,343 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin/srm-impl/0.20.0 2025-08-16T03:23:32,344 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.openflowplugin/srm-shell/0.20.0 2025-08-16T03:23:32,345 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/binding-data-codec-api/14.0.14 2025-08-16T03:23:32,346 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/binding-data-codec-dynamic/14.0.14 2025-08-16T03:23:32,347 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/binding-data-codec-osgi/14.0.14 2025-08-16T03:23:32,348 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/binding-data-codec-spi/14.0.14 2025-08-16T03:23:32,349 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/binding-generator/14.0.14 2025-08-16T03:23:32,349 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/binding-loader/14.0.14 2025-08-16T03:23:32,350 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/binding-model/14.0.14 2025-08-16T03:23:32,351 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/binding-reflect/14.0.14 2025-08-16T03:23:32,352 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/binding-runtime-api/14.0.14 2025-08-16T03:23:32,352 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/binding-runtime-osgi/14.0.14 2025-08-16T03:23:32,353 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/binding-runtime-spi/14.0.14 2025-08-16T03:23:32,354 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/binding-spec/14.0.14 2025-08-16T03:23:32,355 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | 
mvn:org.opendaylight.yangtools/codegen-extensions/14.0.14 2025-08-16T03:23:32,356 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/concepts/14.0.14 2025-08-16T03:23:32,356 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/odlext-model-api/14.0.14 2025-08-16T03:23:32,357 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/odlext-parser-support/14.0.14 2025-08-16T03:23:32,357 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/openconfig-model-api/14.0.14 2025-08-16T03:23:32,358 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/openconfig-parser-support/14.0.14 2025-08-16T03:23:32,359 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc6241-model-api/14.0.14 2025-08-16T03:23:32,360 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc6241-parser-support/14.0.14 2025-08-16T03:23:32,360 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc6536-model-api/14.0.14 2025-08-16T03:23:32,361 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc6536-parser-support/14.0.14 2025-08-16T03:23:32,361 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc6643-model-api/14.0.14 2025-08-16T03:23:32,362 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc6643-parser-support/14.0.14 2025-08-16T03:23:32,363 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc7952-model-api/14.0.14 2025-08-16T03:23:32,363 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc7952-parser-support/14.0.14 2025-08-16T03:23:32,364 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc8040-model-api/14.0.14 2025-08-16T03:23:32,365 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc8040-parser-support/14.0.14 2025-08-16T03:23:32,366 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc8528-model-api/14.0.14 2025-08-16T03:23:32,366 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc8528-parser-support/14.0.14 2025-08-16T03:23:32,367 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc8639-model-api/14.0.14 2025-08-16T03:23:32,367 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc8639-parser-support/14.0.14 2025-08-16T03:23:32,368 | INFO | 
features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc8819-model-api/14.0.14 2025-08-16T03:23:32,369 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/rfc8819-parser-support/14.0.14 2025-08-16T03:23:32,369 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/util/14.0.14 2025-08-16T03:23:32,370 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-common/14.0.14 2025-08-16T03:23:32,371 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-common-netty/14.0.14 2025-08-16T03:23:32,372 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-data-api/14.0.14 2025-08-16T03:23:32,373 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-data-codec-binfmt/14.0.14 2025-08-16T03:23:32,374 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-data-codec-gson/14.0.14 2025-08-16T03:23:32,375 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-data-codec-xml/14.0.14 2025-08-16T03:23:32,375 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-data-impl/14.0.14 2025-08-16T03:23:32,376 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-data-spi/14.0.14 2025-08-16T03:23:32,377 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-data-transform/14.0.14 2025-08-16T03:23:32,378 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-data-tree-api/14.0.14 2025-08-16T03:23:32,378 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-data-tree-ri/14.0.14 2025-08-16T03:23:32,379 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-data-tree-spi/14.0.14 2025-08-16T03:23:32,380 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-data-util/14.0.14 2025-08-16T03:23:32,381 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-ir/14.0.14 2025-08-16T03:23:32,382 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-model-api/14.0.14 2025-08-16T03:23:32,383 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-model-export/14.0.14 2025-08-16T03:23:32,384 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-model-ri/14.0.14 
2025-08-16T03:23:32,385 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-model-spi/14.0.14 2025-08-16T03:23:32,386 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-model-util/14.0.14 2025-08-16T03:23:32,386 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-parser-api/14.0.14 2025-08-16T03:23:32,387 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-parser-impl/14.0.14 2025-08-16T03:23:32,388 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-parser-reactor/14.0.14 2025-08-16T03:23:32,389 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-parser-rfc7950/14.0.14 2025-08-16T03:23:32,391 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-parser-spi/14.0.14 2025-08-16T03:23:32,391 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-repo-api/14.0.14 2025-08-16T03:23:32,392 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-repo-fs/14.0.14 2025-08-16T03:23:32,393 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-repo-spi/14.0.14 2025-08-16T03:23:32,394 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-xpath-api/14.0.14 2025-08-16T03:23:32,394 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.opendaylight.yangtools/yang-xpath-impl/14.0.14 2025-08-16T03:23:32,395 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.url/pax-url-war/2.6.16/jar/uber 2025-08-16T03:23:32,399 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.web/pax-web-api/8.0.30 2025-08-16T03:23:32,400 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.web/pax-web-compatibility-el2/8.0.30 2025-08-16T03:23:32,400 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.web/pax-web-compatibility-servlet31/8.0.30 2025-08-16T03:23:32,401 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.web/pax-web-extender-war/8.0.30 2025-08-16T03:23:32,402 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.web/pax-web-extender-whiteboard/8.0.30 2025-08-16T03:23:32,402 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.web/pax-web-jetty/8.0.30 2025-08-16T03:23:32,403 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.web/pax-web-jsp/8.0.30 2025-08-16T03:23:32,406 | INFO | 
features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.web/pax-web-runtime/8.0.30 2025-08-16T03:23:32,407 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.web/pax-web-spi/8.0.30 2025-08-16T03:23:32,408 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.web/pax-web-tomcat-common/8.0.30 2025-08-16T03:23:32,410 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.ops4j.pax.web/pax-web-websocket/8.0.30 2025-08-16T03:23:32,410 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.osgi/org.osgi.service.component/1.5.1 2025-08-16T03:23:32,411 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.owasp.encoder/encoder/1.3.1 2025-08-16T03:23:32,411 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.scala-lang.modules/scala-parser-combinators_2.13/1.1.2 2025-08-16T03:23:32,412 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.scala-lang/scala-library/2.13.16 2025-08-16T03:23:32,421 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.scala-lang/scala-reflect/2.13.16 2025-08-16T03:23:32,426 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.reactivestreams/reactive-streams/1.0.4 2025-08-16T03:23:32,426 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:org.codehaus.woodstox/stax2-api/4.2.2 2025-08-16T03:23:32,427 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | mvn:tech.pantheon.triemap/triemap/1.3.2 2025-08-16T03:23:32,428 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | wrap:mvn:net.java.dev.stax-utils/stax-utils/20070216 2025-08-16T03:23:32,429 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | wrap:mvn:org.lmdbjava/lmdbjava/0.7.0 2025-08-16T03:23:32,909 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Creating configuration file /tmp/karaf-0.23.0/etc/opendaylight/datastore/initial/config/aaa-password-service-config.xml 2025-08-16T03:23:33,635 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Creating configuration file /tmp/karaf-0.23.0/configuration/factory/pekko.conf 2025-08-16T03:23:33,637 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Creating configuration file /tmp/karaf-0.23.0/etc/org.opendaylight.controller.cluster.datastore.cfg 2025-08-16T03:23:33,648 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Creating configuration file /tmp/karaf-0.23.0//etc/org.jolokia.osgi.cfg 2025-08-16T03:23:33,649 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Creating configuration file /tmp/karaf-0.23.0/etc/org.opendaylight.openflowplugin.cfg 2025-08-16T03:23:33,649 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Creating configuration file 
/tmp/karaf-0.23.0/etc/opendaylight/datastore/initial/config/default-openflow-connection-config.xml 2025-08-16T03:23:33,650 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Creating configuration file /tmp/karaf-0.23.0/etc/opendaylight/datastore/initial/config/legacy-openflow-connection-config.xml 2025-08-16T03:23:33,650 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Creating configuration file /tmp/karaf-0.23.0/etc/opendaylight/datastore/initial/config/aaa-cert-config.xml 2025-08-16T03:23:33,651 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Creating configuration file /tmp/karaf-0.23.0/etc/jetty-web.xml 2025-08-16T03:23:33,654 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Creating configuration file /tmp/karaf-0.23.0/etc/org.opendaylight.restconf.nb.rfc8040.cfg 2025-08-16T03:23:33,656 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Creating configuration file /tmp/karaf-0.23.0/etc/opendaylight/datastore/initial/config/aaa-app-config.xml 2025-08-16T03:23:33,657 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Creating configuration file /tmp/karaf-0.23.0/etc/opendaylight/datastore/initial/config/aaa-datastore-config.xml 2025-08-16T03:23:33,657 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Creating configuration file /tmp/karaf-0.23.0/bin/idmtool 2025-08-16T03:23:33,658 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Creating configuration file /tmp/karaf-0.23.0//etc/org.opendaylight.aaa.filterchain.cfg 2025-08-16T03:23:33,658 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Refreshing bundles: 2025-08-16T03:23:33,658 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | javax.el-api/3.0.3 (Attached fragments changed: [org.ops4j.pax.web.pax-web-compatibility-el2/8.0.30]) 2025-08-16T03:23:33,658 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | javax.enterprise.cdi-api/2.0.0.SP1 (Wired to javax.el-api/3.0.3 which is being refreshed) 2025-08-16T03:23:33,658 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | javax.transaction-api/1.2.0 (Wired to javax.enterprise.cdi-api/2.0.0.SP1 which is being refreshed) 2025-08-16T03:23:33,658 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.servicemix.bundles.jasypt/1.9.3.1 (Should be wired to: jakarta.servlet-api/4.0.0 (through [org.apache.servicemix.bundles.jasypt/1.9.3.1] osgi.wiring.package; resolution:=optional; filter:="(osgi.wiring.package=javax.servlet)")) 2025-08-16T03:23:33,658 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.servicemix.bundles.javax-inject/1.0.0.3 (Bundle will be uninstalled) 2025-08-16T03:23:33,659 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.ops4j.pax.jdbc.config/1.5.7 (Wired to org.apache.servicemix.bundles.jasypt/1.9.3.1 which is being refreshed) 2025-08-16T03:23:33,659 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core 
- 4.4.7 | org.ops4j.pax.jdbc.pool.common/1.5.7 (Wired to javax.transaction-api/1.2.0 which is being refreshed) 2025-08-16T03:23:34,645 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Starting bundles: 2025-08-16T03:23:34,647 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.objectweb.asm/9.7.1 2025-08-16T03:23:34,658 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.objectweb.asm.tree/9.7.1 2025-08-16T03:23:34,658 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.aries.quiesce.api/1.0.0 2025-08-16T03:23:34,659 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.aries.blueprint.api/1.0.1 2025-08-16T03:23:34,659 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.objectweb.asm.commons/9.7.1 2025-08-16T03:23:34,659 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.aries.proxy/1.1.14 2025-08-16T03:23:34,668 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.aries.blueprint.core/1.10.3 2025-08-16T03:23:34,916 | INFO | features-3-thread-1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.apache.aries.blueprint.core/1.10.3 has been started 2025-08-16T03:23:34,929 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.objectweb.asm.tree.analysis/9.7.1 2025-08-16T03:23:35,289 | INFO | fileinstall-/tmp/karaf-0.23.0/etc | fileinstall | 6 - org.apache.felix.fileinstall - 3.7.4 | Creating configuration {org.opendaylight.restconf.nb.rfc8040} from /tmp/karaf-0.23.0/etc/org.opendaylight.restconf.nb.rfc8040.cfg 2025-08-16T03:23:35,290 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.objectweb.asm.util/9.7.1 2025-08-16T03:23:35,291 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.aries.blueprint.cm/1.3.2 2025-08-16T03:23:35,293 | INFO | fileinstall-/tmp/karaf-0.23.0/etc | fileinstall | 6 - org.apache.felix.fileinstall - 3.7.4 | Creating configuration {org.jolokia.osgi} from /tmp/karaf-0.23.0/etc/org.jolokia.osgi.cfg 2025-08-16T03:23:35,296 | INFO | fileinstall-/tmp/karaf-0.23.0/etc | fileinstall | 6 - org.apache.felix.fileinstall - 3.7.4 | Creating configuration {org.opendaylight.controller.cluster.datastore} from /tmp/karaf-0.23.0/etc/org.opendaylight.controller.cluster.datastore.cfg 2025-08-16T03:23:35,298 | INFO | fileinstall-/tmp/karaf-0.23.0/etc | fileinstall | 6 - org.apache.felix.fileinstall - 3.7.4 | Creating configuration {org.opendaylight.openflowplugin} from /tmp/karaf-0.23.0/etc/org.opendaylight.openflowplugin.cfg 2025-08-16T03:23:35,299 | INFO | fileinstall-/tmp/karaf-0.23.0/etc | fileinstall | 6 - org.apache.felix.fileinstall - 3.7.4 | Creating configuration {org.opendaylight.aaa.filterchain} from /tmp/karaf-0.23.0/etc/org.opendaylight.aaa.filterchain.cfg 2025-08-16T03:23:35,310 | INFO | features-3-thread-1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.apache.aries.blueprint.cm/1.3.2 has been started 2025-08-16T03:23:35,312 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - 
org.apache.karaf.features.core - 4.4.7 | org.apache.karaf.deployer.blueprint/4.4.7 2025-08-16T03:23:35,317 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.karaf.deployer.wrap/4.4.7 2025-08-16T03:23:35,322 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.karaf.deployer.kar/4.4.7 2025-08-16T03:23:35,325 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.karaf.deployer.features/4.4.7 2025-08-16T03:23:35,343 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.sshd.osgi/2.14.0 2025-08-16T03:23:35,345 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.sshd.scp/2.14.0 2025-08-16T03:23:35,346 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | jakarta.servlet-api/4.0.0 2025-08-16T03:23:35,346 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.ops4j.pax.web.pax-web-api/8.0.30 2025-08-16T03:23:35,346 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.eclipse.jetty.continuation/9.4.57.v20241219 2025-08-16T03:23:35,347 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.karaf.jaas.config/4.4.7 2025-08-16T03:23:35,355 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.karaf.jaas.modules/4.4.7 2025-08-16T03:23:35,361 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.sshd.sftp/2.14.0 2025-08-16T03:23:35,362 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.jline/3.21.0 2025-08-16T03:23:35,362 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.karaf.shell.core/4.4.7 2025-08-16T03:23:35,394 | INFO | features-3-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.deployer.kar/4.4.7 2025-08-16T03:23:35,397 | INFO | features-3-thread-1 | Activator | 120 - org.apache.karaf.shell.core - 4.4.7 | Not starting local console. To activate set karaf.startLocalConsole=true 2025-08-16T03:23:35,443 | INFO | features-3-thread-1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.apache.karaf.shell.core/4.4.7 has been started 2025-08-16T03:23:35,444 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.karaf.shell.ssh/4.4.7 2025-08-16T03:23:35,492 | INFO | features-3-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Command registration delayed for bundle org.apache.karaf.shell.ssh/4.4.7. 
Missing service: [org.apache.sshd.server.SshServer] 2025-08-16T03:23:35,492 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.aries.jmx.api/1.1.5 2025-08-16T03:23:35,496 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.aries.jmx.core/1.1.8 2025-08-16T03:23:35,502 | INFO | features-3-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Starting JMX OSGi agent 2025-08-16T03:23:35,513 | INFO | features-3-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Registering MBean with ObjectName [osgi.core:service=permissionadmin,version=1.2,framework=org.eclipse.osgi,uuid=bc3fdd2f-27a7-4892-8392-998619247528] for service with service.id [15] 2025-08-16T03:23:35,515 | INFO | features-3-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Registering MBean with ObjectName [osgi.compendium:service=cm,version=1.3,framework=org.eclipse.osgi,uuid=bc3fdd2f-27a7-4892-8392-998619247528] for service with service.id [39] 2025-08-16T03:23:35,517 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.karaf.package.core/4.4.7 2025-08-16T03:23:35,526 | INFO | features-3-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.package.core/4.4.7 2025-08-16T03:23:35,526 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | javax.websocket-api/1.1.2 2025-08-16T03:23:35,528 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.ops4j.pax.web.pax-web-spi/8.0.30 2025-08-16T03:23:35,528 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | javax.el-api/3.0.3 2025-08-16T03:23:35,529 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.eclipse.jdt.core.compiler.batch/3.26.0.v20210609-0549 2025-08-16T03:23:35,530 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.ops4j.pax.web.pax-web-jsp/8.0.30 2025-08-16T03:23:35,530 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.ops4j.pax.web.pax-web-tomcat-common/8.0.30 2025-08-16T03:23:35,531 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.ops4j.pax.web.pax-web-extender-war/8.0.30 2025-08-16T03:23:35,532 | INFO | features-3-thread-1 | Activator | 392 - org.ops4j.pax.web.pax-web-extender-war - 8.0.30 | Configuring WAR extender thread pool. 
Pool size = 3 2025-08-16T03:23:35,688 | INFO | activator-1-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.shell.ssh/4.4.7 2025-08-16T03:23:35,695 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.karaf.bundle.core/4.4.7 2025-08-16T03:23:35,698 | INFO | activator-1-thread-1 | DefaultIoServiceFactoryFactory | 125 - org.apache.sshd.osgi - 2.14.0 | No detected/configured IoServiceFactoryFactory; using Nio2ServiceFactoryFactory 2025-08-16T03:23:35,718 | INFO | features-3-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.bundle.core/4.4.7 2025-08-16T03:23:35,719 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.osgi.service.component/1.5.1.202212101352 2025-08-16T03:23:35,727 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.felix.scr/2.2.6 2025-08-16T03:23:35,738 | INFO | features-3-thread-1 | ROOT | 93 - org.apache.felix.scr - 2.2.6 | bundle org.apache.felix.scr:2.2.6 (93) Starting with globalExtender setting: false 2025-08-16T03:23:35,743 | INFO | features-3-thread-1 | ROOT | 93 - org.apache.felix.scr - 2.2.6 | bundle org.apache.felix.scr:2.2.6 (93) Version = 2.2.6 2025-08-16T03:23:35,752 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.karaf.scr.state/4.4.7 2025-08-16T03:23:35,791 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.aries.jmx.blueprint.api/1.2.0 2025-08-16T03:23:35,793 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.aries.jmx.blueprint.core/1.2.0 2025-08-16T03:23:35,795 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.eclipse.jetty.util/9.4.57.v20241219 2025-08-16T03:23:35,796 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.eclipse.jetty.xml/9.4.57.v20241219 2025-08-16T03:23:35,802 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.karaf.management.server/4.4.7 2025-08-16T03:23:35,854 | INFO | activator-1-thread-1 | Activator | 113 - org.apache.karaf.management.server - 4.4.7 | Setting java.rmi.server.hostname system property to 127.0.0.1 2025-08-16T03:23:35,854 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.karaf.scr.management/4.4.7 2025-08-16T03:23:35,863 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.eclipse.jetty.jmx/9.4.57.v20241219 2025-08-16T03:23:35,865 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.eclipse.jetty.io/9.4.57.v20241219 2025-08-16T03:23:35,865 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.eclipse.jetty.http/9.4.57.v20241219 2025-08-16T03:23:35,866 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.eclipse.jetty.server/9.4.57.v20241219 2025-08-16T03:23:35,866 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.eclipse.jetty.security/9.4.57.v20241219 
2025-08-16T03:23:35,866 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.eclipse.jetty.util.ajax/9.4.57.v20241219 2025-08-16T03:23:35,867 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.eclipse.jetty.servlet/9.4.57.v20241219 2025-08-16T03:23:35,867 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.eclipse.jetty.jaas/9.4.57.v20241219 2025-08-16T03:23:35,867 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.eclipse.jetty.servlets/9.4.57.v20241219 2025-08-16T03:23:35,868 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.ops4j.pax.web.pax-web-jetty/8.0.30 2025-08-16T03:23:35,890 | INFO | features-3-thread-1 | log | 139 - org.eclipse.jetty.util - 9.4.57.v20241219 | Logging initialized @11534ms to org.eclipse.jetty.util.log.Slf4jLog 2025-08-16T03:23:35,919 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.ops4j.pax.web.pax-web-runtime/8.0.30 2025-08-16T03:23:35,951 | INFO | CM Configuration Updater (ManagedService Update: pid=[org.ops4j.pax.web]) | Activator | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Scheduling Pax Web reconfiguration because configuration has changed 2025-08-16T03:23:35,953 | INFO | features-3-thread-1 | Activator | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | EventAdmin support enabled, WAB events will be posted to EventAdmin topics. 2025-08-16T03:23:35,953 | INFO | features-3-thread-1 | Activator | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Pax Web Runtime started 2025-08-16T03:23:35,953 | INFO | paxweb-config-3-thread-1 (change config) | Activator | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Scheduling Pax Web reconfiguration because ServerControllerFactory has been registered 2025-08-16T03:23:35,956 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.karaf.http.core/4.4.7 2025-08-16T03:23:35,956 | INFO | activator-1-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Registering org.osgi.jmx.framework.PackageStateMBean to MBeanServer org.apache.karaf.management.internal.EventAdminMBeanServerWrapper@78e79d4e with name osgi.core:type=packageState,version=1.5,framework=org.eclipse.osgi,uuid=bc3fdd2f-27a7-4892-8392-998619247528 2025-08-16T03:23:35,957 | INFO | activator-1-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Registering org.osgi.jmx.framework.BundleStateMBean to MBeanServer org.apache.karaf.management.internal.EventAdminMBeanServerWrapper@78e79d4e with name osgi.core:type=bundleState,version=1.7,framework=org.eclipse.osgi,uuid=bc3fdd2f-27a7-4892-8392-998619247528 2025-08-16T03:23:35,959 | INFO | activator-1-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Registering org.osgi.jmx.service.permissionadmin.PermissionAdminMBean to MBeanServer org.apache.karaf.management.internal.EventAdminMBeanServerWrapper@78e79d4e with name osgi.core:service=permissionadmin,version=1.2,framework=org.eclipse.osgi,uuid=bc3fdd2f-27a7-4892-8392-998619247528 2025-08-16T03:23:35,959 | INFO | activator-1-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Registering org.osgi.jmx.service.cm.ConfigurationAdminMBean to MBeanServer org.apache.karaf.management.internal.EventAdminMBeanServerWrapper@78e79d4e with name 
osgi.compendium:service=cm,version=1.3,framework=org.eclipse.osgi,uuid=bc3fdd2f-27a7-4892-8392-998619247528 2025-08-16T03:23:35,959 | INFO | activator-1-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Registering org.osgi.jmx.framework.ServiceStateMBean to MBeanServer org.apache.karaf.management.internal.EventAdminMBeanServerWrapper@78e79d4e with name osgi.core:type=serviceState,version=1.7,framework=org.eclipse.osgi,uuid=bc3fdd2f-27a7-4892-8392-998619247528 2025-08-16T03:23:35,959 | INFO | activator-1-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Registering org.osgi.jmx.framework.wiring.BundleWiringStateMBean to MBeanServer org.apache.karaf.management.internal.EventAdminMBeanServerWrapper@78e79d4e with name osgi.core:type=wiringState,version=1.1,framework=org.eclipse.osgi,uuid=bc3fdd2f-27a7-4892-8392-998619247528 2025-08-16T03:23:35,959 | INFO | activator-1-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Registering org.osgi.jmx.framework.FrameworkMBean to MBeanServer org.apache.karaf.management.internal.EventAdminMBeanServerWrapper@78e79d4e with name osgi.core:type=framework,version=1.7,framework=org.eclipse.osgi,uuid=bc3fdd2f-27a7-4892-8392-998619247528 2025-08-16T03:23:36,040 | INFO | features-3-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Command registration delayed for bundle org.apache.karaf.http.core/4.4.7. Missing service: [org.apache.karaf.http.core.ProxyService] 2025-08-16T03:23:36,042 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.karaf.shell.table/4.4.7 2025-08-16T03:23:36,064 | INFO | activator-1-thread-1 | ServiceComponentRuntimeMBeanImpl | 115 - org.apache.karaf.scr.management - 4.4.7 | Activating the Apache Karaf ServiceComponentRuntime MBean 2025-08-16T03:23:36,066 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.karaf.service.core/4.4.7 2025-08-16T03:23:36,074 | INFO | features-3-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.service.core/4.4.7 2025-08-16T03:23:36,074 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.karaf.log.core/4.4.7 2025-08-16T03:23:36,082 | INFO | paxweb-config-3-thread-1 (change controller) | Activator | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Configuring server controller org.ops4j.pax.web.service.jetty.internal.JettyServerController 2025-08-16T03:23:36,082 | INFO | paxweb-config-3-thread-1 (change controller) | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Configuring JettyServerController{configuration=329cdb6c-5682-4396-8805-aab52fc6928f,state=UNCONFIGURED} 2025-08-16T03:23:36,082 | INFO | paxweb-config-3-thread-1 (change controller) | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Creating Jetty server instance using configuration properties. 2025-08-16T03:23:36,092 | INFO | features-3-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Command registration delayed for bundle org.apache.karaf.log.core/4.4.7. 
Missing service: [org.apache.karaf.log.core.LogService] 2025-08-16T03:23:36,092 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.karaf.diagnostic.core/4.4.7 2025-08-16T03:23:36,114 | INFO | features-3-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.diagnostic.core/4.4.7 2025-08-16T03:23:36,115 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.karaf.bundle.blueprintstate/4.4.7 2025-08-16T03:23:36,120 | INFO | paxweb-config-3-thread-1 (change controller) | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Processing Jetty configuration from files: [etc/jetty.xml] 2025-08-16T03:23:36,121 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.karaf.features.command/4.4.7 2025-08-16T03:23:36,135 | INFO | features-3-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.features.command/4.4.7 2025-08-16T03:23:36,123 | INFO | activator-1-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.log.core/4.4.7 2025-08-16T03:23:36,138 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.eclipse.jetty.client/9.4.57.v20241219 2025-08-16T03:23:36,139 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.karaf.kar.core/4.4.7 2025-08-16T03:23:36,150 | INFO | features-3-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.kar.core/4.4.7 2025-08-16T03:23:36,151 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.karaf.config.command/4.4.7 2025-08-16T03:23:36,158 | INFO | features-3-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.config.command/4.4.7 2025-08-16T03:23:36,224 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.karaf.shell.commands/4.4.7 2025-08-16T03:23:36,237 | INFO | features-3-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.shell.commands/4.4.7 2025-08-16T03:23:36,238 | INFO | features-3-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Updating commands for bundle org.apache.karaf.shell.commands/4.4.7 2025-08-16T03:23:36,238 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.ops4j.pax.web.pax-web-websocket/8.0.30 2025-08-16T03:23:36,241 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.karaf.system.core/4.4.7 2025-08-16T03:23:36,249 | INFO | features-3-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.system.core/4.4.7 2025-08-16T03:23:36,249 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.karaf.web.core/4.4.7 2025-08-16T03:23:36,257 | INFO | features-3-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Command registration delayed for bundle 
org.apache.karaf.web.core/4.4.7. Missing service: [org.apache.karaf.web.WebContainerService] 2025-08-16T03:23:36,258 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.ops4j.pax.url.war/2.6.16 2025-08-16T03:23:36,264 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.karaf.instance.core/4.4.7 2025-08-16T03:23:36,283 | INFO | features-3-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.instance.core/4.4.7 2025-08-16T03:23:36,284 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.karaf.jaas.command/4.4.7 2025-08-16T03:23:36,325 | INFO | features-3-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.jaas.command/4.4.7 2025-08-16T03:23:36,326 | INFO | features-3-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Updating commands for bundle org.apache.karaf.jaas.command/4.4.7 2025-08-16T03:23:36,327 | INFO | features-3-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Updating commands for bundle org.apache.karaf.jaas.command/4.4.7 2025-08-16T03:23:36,327 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.ops4j.pax.web.pax-web-extender-whiteboard/8.0.30 2025-08-16T03:23:36,363 | INFO | paxweb-config-3-thread-1 (change controller) | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Found configured connector "jetty-default": 0.0.0.0:8181 2025-08-16T03:23:36,365 | INFO | paxweb-config-3-thread-1 (change controller) | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Using configured jetty-default@74d21d86{HTTP/1.1, (http/1.1)}{0.0.0.0:8181} as non secure connector for address: 0.0.0.0:8181 2025-08-16T03:23:36,366 | INFO | paxweb-config-3-thread-1 (change controller) | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Eagerly starting Jetty thread pool QueuedThreadPool[qtp415579349]@18c53cd5{STOPPED,0<=0<=200,i=0,r=-1,q=0}[NO_TRY] 2025-08-16T03:23:36,436 | INFO | paxweb-config-3-thread-1 (change controller) | JettyFactory | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Adding JMX support to Jetty server 2025-08-16T03:23:36,436 | INFO | features-3-thread-1 | Activator | 393 - org.ops4j.pax.web.pax-web-extender-whiteboard - 8.0.30 | Starting Pax Web Whiteboard Extender 2025-08-16T03:23:36,457 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.aries.jmx.whiteboard/1.2.0 2025-08-16T03:23:36,475 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.controller.blueprint/11.0.0 2025-08-16T03:23:36,477 | INFO | features-3-thread-1 | BlueprintBundleTracker | 180 - org.opendaylight.controller.blueprint - 11.0.0 | Starting BlueprintBundleTracker 2025-08-16T03:23:36,483 | INFO | paxweb-config-3-thread-1 (change controller) | Activator | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Starting server controller org.ops4j.pax.web.service.jetty.internal.JettyServerController 2025-08-16T03:23:36,484 | INFO | paxweb-config-3-thread-1 (change controller) | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Starting JettyServerController{configuration=329cdb6c-5682-4396-8805-aab52fc6928f,state=STOPPED} 
2025-08-16T03:23:36,484 | INFO | paxweb-config-3-thread-1 (change controller) | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Starting Server@5e7e528a{STOPPED}[9.4.57.v20241219] 2025-08-16T03:23:36,485 | INFO | paxweb-config-3-thread-1 (change controller) | Server | 139 - org.eclipse.jetty.util - 9.4.57.v20241219 | jetty-9.4.57.v20241219; built: 2025-01-08T21:24:30.412Z; git: df524e6b29271c2e09ba9aea83c18dc9db464a31; jvm 21.0.5+11-Ubuntu-1ubuntu122.04 2025-08-16T03:23:36,487 | INFO | Blueprint Event Dispatcher: 1 | BlueprintBundleTracker | 180 - org.opendaylight.controller.blueprint - 11.0.0 | Blueprint container for bundle org.apache.karaf.shell.core_4.4.7 [120] was successfully created 2025-08-16T03:23:36,488 | INFO | Blueprint Event Dispatcher: 1 | BlueprintBundleTracker | 180 - org.opendaylight.controller.blueprint - 11.0.0 | Blueprint container for bundle org.apache.aries.blueprint.core_1.10.3 [79] was successfully created 2025-08-16T03:23:36,488 | INFO | Blueprint Event Dispatcher: 1 | BlueprintBundleTracker | 180 - org.opendaylight.controller.blueprint - 11.0.0 | Blueprint container for bundle org.apache.aries.blueprint.cm_1.3.2 [78] was successfully created 2025-08-16T03:23:36,502 | INFO | paxweb-config-3-thread-1 (change controller) | session | 139 - org.eclipse.jetty.util - 9.4.57.v20241219 | DefaultSessionIdManager workerName=node0 2025-08-16T03:23:36,502 | INFO | paxweb-config-3-thread-1 (change controller) | session | 139 - org.eclipse.jetty.util - 9.4.57.v20241219 | No SessionScavenger set, using defaults 2025-08-16T03:23:36,505 | INFO | paxweb-config-3-thread-1 (change controller) | session | 139 - org.eclipse.jetty.util - 9.4.57.v20241219 | node0 Scavenging every 660000ms 2025-08-16T03:23:36,538 | INFO | paxweb-config-3-thread-1 (change controller) | AbstractConnector | 139 - org.eclipse.jetty.util - 9.4.57.v20241219 | Started jetty-default@74d21d86{HTTP/1.1, (http/1.1)}{0.0.0.0:8181} 2025-08-16T03:23:36,539 | INFO | paxweb-config-3-thread-1 (change controller) | Server | 139 - org.eclipse.jetty.util - 9.4.57.v20241219 | Started @12187ms 2025-08-16T03:23:36,541 | INFO | paxweb-config-3-thread-1 (change controller) | Activator | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering HttpService factory 2025-08-16T03:23:36,548 | INFO | paxweb-config-3-thread-1 (change controller) | StoppableHttpServiceFactory | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Binding HTTP Service for bundle: [org.apache.karaf.http.core_4.4.7 [105]] 2025-08-16T03:23:36,549 | INFO | HttpService->WarExtender (add HttpService) | StoppableHttpServiceFactory | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Binding HTTP Service for bundle: [org.ops4j.pax.web.pax-web-extender-war_8.0.30 [392]] 2025-08-16T03:23:36,557 | INFO | paxweb-config-3-thread-1 (change controller) | StoppableHttpServiceFactory | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Binding HTTP Service for bundle: [org.apache.karaf.web.core_4.4.7 [124]] 2025-08-16T03:23:36,562 | INFO | activator-1-thread-2 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.http.core/4.4.7 2025-08-16T03:23:36,567 | INFO | activator-1-thread-2 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.web.core/4.4.7 2025-08-16T03:23:36,570 | INFO | paxweb-config-3-thread-1 (change controller) | Activator | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering HttpServiceRuntime 
2025-08-16T03:23:36,571 | INFO | HttpService->Whiteboard (add HttpService) | StoppableHttpServiceFactory | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Binding HTTP Service for bundle: [org.ops4j.pax.web.pax-web-extender-whiteboard_8.0.30 [393]] 2025-08-16T03:23:36,576 | INFO | paxweb-config-3-thread-1 | ServerModel | 397 - org.ops4j.pax.web.pax-web-spi - 8.0.30 | Created new ServletContextModel{id=ServletContextModel-3,contextPath='/'} 2025-08-16T03:23:36,576 | INFO | paxweb-config-3-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of OsgiContextModel{WB,id=OCM-1,name='default',path='/',bundle=org.ops4j.pax.web.pax-web-extender-whiteboard,context=(supplier)}", size=2} 2025-08-16T03:23:36,576 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Creating new Jetty context for ServletContextModel{id=ServletContextModel-3,contextPath='/'} 2025-08-16T03:23:36,614 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Adding OsgiContextModel{WB,id=OCM-1,name='default',path='/',bundle=org.ops4j.pax.web.pax-web-extender-whiteboard,context=(supplier)} to o.o.p.w.s.j.i.PaxWebServletContextHandler@1d0954b9{/,null,STOPPED} 2025-08-16T03:23:36,618 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing default OSGi context model for o.o.p.w.s.j.i.PaxWebServletContextHandler@1d0954b9{/,null,STOPPED} 2025-08-16T03:23:36,756 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | com.google.guava.failureaccess/1.0.3 2025-08-16T03:23:36,759 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | jakarta.annotation-api/1.3.5 2025-08-16T03:23:36,769 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | com.google.guava/33.4.8.jre 2025-08-16T03:23:36,770 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.commons.lang3/3.17.0 2025-08-16T03:23:36,771 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | checker-qual/3.49.3 2025-08-16T03:23:36,773 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.aaa.encrypt-service/0.21.0 2025-08-16T03:23:36,774 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.concepts/14.0.14 2025-08-16T03:23:36,775 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | triemap/1.3.2 2025-08-16T03:23:36,776 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.util/14.0.14 2025-08-16T03:23:36,777 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.yang-common/14.0.14 2025-08-16T03:23:36,777 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.mdsal-common-api/14.0.13 2025-08-16T03:23:36,778 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.binding-spec/14.0.14 2025-08-16T03:23:36,779 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - 
org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.mdsal-binding-api/14.0.13 2025-08-16T03:23:36,780 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.geronimo.specs.geronimo-atinject_1.0_spec/1.2.0 2025-08-16T03:23:36,781 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.yang-xpath-api/14.0.14 2025-08-16T03:23:36,781 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.yang-model-api/14.0.14 2025-08-16T03:23:36,783 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.yang-data-api/14.0.14 2025-08-16T03:23:36,783 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.yang-data-tree-api/14.0.14 2025-08-16T03:23:36,784 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.mdsal-dom-api/14.0.13 2025-08-16T03:23:36,784 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.odlext-model-api/14.0.14 2025-08-16T03:23:36,785 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.yang-data-tree-spi/14.0.14 2025-08-16T03:23:36,786 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.mdsal-dom-spi/14.0.13 2025-08-16T03:23:36,786 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.binding-data-codec-api/14.0.14 2025-08-16T03:23:36,787 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.binding-model/14.0.14 2025-08-16T03:23:36,788 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.rfc8040-model-api/14.0.14 2025-08-16T03:23:36,788 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.yang-repo-api/14.0.14 2025-08-16T03:23:36,789 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.binding-runtime-api/14.0.14 2025-08-16T03:23:36,790 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.binding-data-codec-spi/14.0.14 2025-08-16T03:23:36,791 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.binding-reflect/14.0.14 2025-08-16T03:23:36,792 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.yang-data-spi/14.0.14 2025-08-16T03:23:36,794 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.rfc8528-model-api/14.0.14 2025-08-16T03:23:36,794 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.rfc7952-model-api/14.0.14 2025-08-16T03:23:36,795 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | 
org.opendaylight.yangtools.yang-ir/14.0.14 2025-08-16T03:23:36,795 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.yang-model-spi/14.0.14 2025-08-16T03:23:36,796 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.yang-model-util/14.0.14 2025-08-16T03:23:36,796 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.yang-data-util/14.0.14 2025-08-16T03:23:36,797 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.yang-data-impl/14.0.14 2025-08-16T03:23:36,797 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | net.bytebuddy.byte-buddy/1.17.5 2025-08-16T03:23:36,800 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.binding-loader/14.0.14 2025-08-16T03:23:36,800 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.binding-data-codec-dynamic/14.0.14 2025-08-16T03:23:36,807 | INFO | features-3-thread-1 | SimpleBindingDOMCodecFactory | 325 - org.opendaylight.yangtools.binding-data-codec-dynamic - 14.0.14 | Binding/DOM Codec enabled 2025-08-16T03:23:36,807 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.yang-parser-api/14.0.14 2025-08-16T03:23:36,809 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.yang-parser-spi/14.0.14 2025-08-16T03:23:36,809 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.odlext-parser-support/14.0.14 2025-08-16T03:23:36,810 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.openconfig-model-api/14.0.14 2025-08-16T03:23:36,810 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.openconfig-parser-support/14.0.14 2025-08-16T03:23:36,811 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.rfc6241-model-api/14.0.14 2025-08-16T03:23:36,811 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.rfc6241-parser-support/14.0.14 2025-08-16T03:23:36,811 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.rfc6536-model-api/14.0.14 2025-08-16T03:23:36,812 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.rfc6536-parser-support/14.0.14 2025-08-16T03:23:36,812 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.rfc6643-model-api/14.0.14 2025-08-16T03:23:36,813 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.rfc6643-parser-support/14.0.14 2025-08-16T03:23:36,813 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | 
org.opendaylight.yangtools.yang-model-ri/14.0.14 2025-08-16T03:23:36,814 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.rfc7952-parser-support/14.0.14 2025-08-16T03:23:36,814 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.rfc8040-parser-support/14.0.14 2025-08-16T03:23:36,815 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.rfc8528-parser-support/14.0.14 2025-08-16T03:23:36,817 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.rfc8639-model-api/14.0.14 2025-08-16T03:23:36,819 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.rfc8639-parser-support/14.0.14 2025-08-16T03:23:36,821 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.rfc8819-model-api/14.0.14 2025-08-16T03:23:36,821 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.rfc8819-parser-support/14.0.14 2025-08-16T03:23:36,823 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.yang-repo-spi/14.0.14 2025-08-16T03:23:36,823 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.antlr.antlr4-runtime/4.13.2 2025-08-16T03:23:36,824 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.yang-parser-reactor/14.0.14 2025-08-16T03:23:36,825 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.yang-parser-rfc7950/14.0.14 2025-08-16T03:23:36,826 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.yang-xpath-impl/14.0.14 2025-08-16T03:23:36,831 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.yang-parser-impl/14.0.14 2025-08-16T03:23:36,836 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.binding-runtime-spi/14.0.14 2025-08-16T03:23:36,845 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.binding-generator/14.0.14 2025-08-16T03:23:36,852 | INFO | features-3-thread-1 | DefaultBindingRuntimeGenerator | 328 - org.opendaylight.yangtools.binding-generator - 14.0.14 | Binding/YANG type support activated 2025-08-16T03:23:36,853 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.binding-runtime-osgi/14.0.14 2025-08-16T03:23:36,863 | INFO | features-3-thread-1 | OSGiBindingRuntime | 333 - org.opendaylight.yangtools.binding-runtime-osgi - 14.0.14 | Binding Runtime activating 2025-08-16T03:23:36,864 | INFO | features-3-thread-1 | OSGiBindingRuntime | 333 - org.opendaylight.yangtools.binding-runtime-osgi - 14.0.14 | Binding Runtime activated 2025-08-16T03:23:36,869 | INFO | features-3-thread-1 | OSGiModelRuntime | 333 - org.opendaylight.yangtools.binding-runtime-osgi - 14.0.14 | Model Runtime 
starting 2025-08-16T03:23:36,908 | INFO | features-3-thread-1 | KarafFeaturesSupport | 333 - org.opendaylight.yangtools.binding-runtime-osgi - 14.0.14 | Will attempt to integrate with Karaf FeaturesService 2025-08-16T03:23:37,611 | INFO | features-3-thread-1 | NettyTransportSupport | 284 - org.opendaylight.netconf.transport-api - 9.0.0 | Netty transport backed by epoll(2) 2025-08-16T03:23:37,853 | INFO | features-3-thread-1 | SharedEffectiveModelContextFactory | 379 - org.opendaylight.yangtools.yang-parser-impl - 14.0.14 | Using weak references 2025-08-16T03:23:40,084 | INFO | features-3-thread-1 | OSGiModuleInfoSnapshotImpl | 333 - org.opendaylight.yangtools.binding-runtime-osgi - 14.0.14 | EffectiveModelContext generation 1 activated 2025-08-16T03:23:40,916 | INFO | features-3-thread-1 | OSGiBindingRuntimeContextImpl | 333 - org.opendaylight.yangtools.binding-runtime-osgi - 14.0.14 | BindingRuntimeContext generation 1 activated 2025-08-16T03:23:40,917 | INFO | features-3-thread-1 | GlobalBindingRuntimeContext | 333 - org.opendaylight.yangtools.binding-runtime-osgi - 14.0.14 | Global BindingRuntimeContext generation 1 activated 2025-08-16T03:23:40,918 | INFO | features-3-thread-1 | OSGiModelRuntime | 333 - org.opendaylight.yangtools.binding-runtime-osgi - 14.0.14 | Model Runtime started 2025-08-16T03:23:40,918 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.binding-data-codec-osgi/14.0.14 2025-08-16T03:23:40,928 | INFO | features-3-thread-1 | OSGiBindingDOMCodec | 326 - org.opendaylight.yangtools.binding-data-codec-osgi - 14.0.14 | Binding/DOM Codec activating 2025-08-16T03:23:40,953 | INFO | features-3-thread-1 | OSGiBindingDOMCodecServicesImpl | 326 - org.opendaylight.yangtools.binding-data-codec-osgi - 14.0.14 | Binding/DOM Codec generation 1 activated 2025-08-16T03:23:40,954 | INFO | features-3-thread-1 | GlobalBindingDOMCodecServices | 326 - org.opendaylight.yangtools.binding-data-codec-osgi - 14.0.14 | Global Binding/DOM Codec activated with generation 1 2025-08-16T03:23:40,956 | INFO | features-3-thread-1 | OSGiBindingDOMCodec | 326 - org.opendaylight.yangtools.binding-data-codec-osgi - 14.0.14 | Binding/DOM Codec activated 2025-08-16T03:23:40,957 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding-dom-adapter/14.0.13 2025-08-16T03:23:40,975 | INFO | features-3-thread-1 | OSGiBlockingBindingNormalizer | 204 - org.opendaylight.mdsal.binding-dom-adapter - 14.0.13 | Binding/DOM adapter activated 2025-08-16T03:23:40,984 | INFO | features-3-thread-1 | DynamicBindingAdapter | 204 - org.opendaylight.mdsal.binding-dom-adapter - 14.0.13 | 8 DOMService trackers started 2025-08-16T03:23:40,986 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.aaa.encrypt-service-impl/0.21.0 2025-08-16T03:23:40,990 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.model.ietf-type-util/14.0.13 2025-08-16T03:23:40,991 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc6991-ietf-yang-types/14.0.13 2025-08-16T03:23:40,995 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc8343/14.0.13 2025-08-16T03:23:40,997 | INFO | features-3-thread-1 | 
FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | com.typesafe.config/1.4.3 2025-08-16T03:23:40,998 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.scala-lang.scala-library/2.13.16.v20250107-233423-VFINAL-3f6bdae 2025-08-16T03:23:41,001 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | com.typesafe.sslconfig/0.6.1 2025-08-16T03:23:41,002 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.agrona.core/1.15.2 2025-08-16T03:23:41,003 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | io.aeron.client/1.38.1 2025-08-16T03:23:41,004 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | io.aeron.driver/1.38.1 2025-08-16T03:23:41,005 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | wrap_file__tmp_karaf-0.23.0_system_org_lmdbjava_lmdbjava_0.7.0_lmdbjava-0.7.0.jar/0.0.0 2025-08-16T03:23:41,005 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | reactive-streams/1.0.4 2025-08-16T03:23:41,005 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.controller.repackaged-pekko/11.0.0 2025-08-16T03:23:41,009 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc6991-ietf-inet-types/14.0.13 2025-08-16T03:23:41,010 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.model.yang-ext/2013.9.7.26_13 2025-08-16T03:23:41,011 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.model.inventory/0.20.0 2025-08-16T03:23:41,011 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.model.opendaylight-l2-types/2013.8.27.26_13 2025-08-16T03:23:41,012 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | io.netty.buffer/4.2.2.Final 2025-08-16T03:23:41,012 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.codegen-extensions/14.0.14 2025-08-16T03:23:41,012 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.openflowjava.openflow-protocol-api/0.20.0 2025-08-16T03:23:41,013 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.model.flow-base/0.20.0 2025-08-16T03:23:41,014 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.model.flow-service/0.20.0 2025-08-16T03:23:41,015 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.model.flow-statistics/0.20.0 2025-08-16T03:23:41,016 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | io.dropwizard.metrics.core/4.2.32 2025-08-16T03:23:41,017 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | io.dropwizard.metrics.jmx/4.2.32 2025-08-16T03:23:41,017 
| INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | lz4-java/1.8.0 2025-08-16T03:23:41,018 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.controller.raft-api/11.0.0 2025-08-16T03:23:41,018 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.controller.raft-spi/11.0.0 2025-08-16T03:23:41,019 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.yang-data-codec-binfmt/14.0.14 2025-08-16T03:23:41,020 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | com.rabbitmq.client/5.25.0 2025-08-16T03:23:41,021 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.aaa.shiro-api/0.21.0 2025-08-16T03:23:41,021 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.srm-api/0.20.0 2025-08-16T03:23:41,022 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | javax.servlet-api/3.1.0 2025-08-16T03:23:41,023 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | com.h2database/2.3.232 2025-08-16T03:23:41,035 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.infrautils.ready-api/7.1.4 2025-08-16T03:23:41,037 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | io.netty.resolver/4.2.2.Final 2025-08-16T03:23:41,038 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | io.netty.transport/4.2.2.Final 2025-08-16T03:23:41,039 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | io.netty.transport-native-unix-common/4.2.2.Final 2025-08-16T03:23:41,039 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | io.netty.codec-base/4.2.2.Final 2025-08-16T03:23:41,039 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | io.netty.handler/4.2.2.Final 2025-08-16T03:23:41,047 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.shaded-sshd/9.0.0 2025-08-16T03:23:41,049 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.aries.util/1.1.3 2025-08-16T03:23:41,050 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.commons.collections/3.2.2 2025-08-16T03:23:41,052 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.commons.commons-beanutils/1.11.0 2025-08-16T03:23:41,053 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.owasp.encoder/1.3.1 2025-08-16T03:23:41,053 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.aaa.repackaged-shiro/0.21.0 2025-08-16T03:23:41,054 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc8040-ietf-restconf/14.0.13 2025-08-16T03:23:41,055 | INFO | 
features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.commons.text/1.13.0 2025-08-16T03:23:41,055 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc8341/14.0.13 2025-08-16T03:23:41,056 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc9640/14.0.13 2025-08-16T03:23:41,056 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc9643-ietf-tcp-common/14.0.13 2025-08-16T03:23:41,057 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc9643-ietf-tcp-client/14.0.13 2025-08-16T03:23:41,057 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.controller.cds-access-api/11.0.0 2025-08-16T03:23:41,058 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.controller.sal-cluster-admin-api/11.0.0 2025-08-16T03:23:41,058 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc6241/14.0.13 2025-08-16T03:23:41,059 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.netconf-api/9.0.0 2025-08-16T03:23:41,059 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc8344/14.0.13 2025-08-16T03:23:41,060 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | stax2-api/4.2.2 2025-08-16T03:23:41,061 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | io.netty.transport-classes-epoll/4.2.2.Final 2025-08-16T03:23:41,061 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | com.google.gson/2.13.1 2025-08-16T03:23:41,062 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.infrautils.diagstatus-api/7.1.4 2025-08-16T03:23:41,062 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.openflowjava.openflow-protocol-spi/0.20.0 2025-08-16T03:23:41,063 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.openflowjava.util/0.20.0 2025-08-16T03:23:41,063 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.yang-common-netty/14.0.14 2025-08-16T03:23:41,064 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.infrautils.util/7.1.4 2025-08-16T03:23:41,065 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.odlparent.bundles-diag/14.1.0 2025-08-16T03:23:41,067 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.infrautils.ready-impl/7.1.4 2025-08-16T03:23:41,080 | INFO | features-3-thread-1 | KarafSystemReady | 202 - org.opendaylight.infrautils.ready-impl 
- 7.1.4 | ThreadFactory created: SystemReadyService 2025-08-16T03:23:41,082 | INFO | features-3-thread-1 | KarafSystemReady | 202 - org.opendaylight.infrautils.ready-impl - 7.1.4 | Now starting to provide full system readiness status updates (see TestBundleDiag's logs)... 2025-08-16T03:23:41,083 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.infrautils.diagstatus-impl/7.1.4 2025-08-16T03:23:41,084 | INFO | SystemReadyService-0 | KarafSystemReady | 202 - org.opendaylight.infrautils.ready-impl - 7.1.4 | checkBundleDiagInfos() started... 2025-08-16T03:23:41,131 | INFO | features-3-thread-1 | DiagStatusServiceImpl | 199 - org.opendaylight.infrautils.diagstatus-impl - 7.1.4 | Diagnostic Status Service started 2025-08-16T03:23:41,137 | INFO | features-3-thread-1 | MBeanUtils | 198 - org.opendaylight.infrautils.diagstatus-api - 7.1.4 | MBean registration for org.opendaylight.infrautils.diagstatus:type=SvcStatus SUCCESSFUL. 2025-08-16T03:23:41,137 | INFO | features-3-thread-1 | DiagStatusServiceMBeanImpl | 199 - org.opendaylight.infrautils.diagstatus-impl - 7.1.4 | Diagnostic Status Service management started 2025-08-16T03:23:41,138 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.openflowjava.openflow-protocol-impl/0.20.0 2025-08-16T03:23:41,142 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.iana.tls-cipher-suite-algs/14.0.13 2025-08-16T03:23:41,143 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc9642/14.0.13 2025-08-16T03:23:41,144 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc9645-ietf-tls-common/14.0.13 2025-08-16T03:23:41,144 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc8528/14.0.13 2025-08-16T03:23:41,144 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc8529/14.0.13 2025-08-16T03:23:41,145 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc8639/14.0.13 2025-08-16T03:23:41,145 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.mdsal-eos-common-api/14.0.13 2025-08-16T03:23:41,145 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.mdsal-eos-dom-api/14.0.13 2025-08-16T03:23:41,146 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.model.general-entity/14.0.13 2025-08-16T03:23:41,146 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | wrap_file__tmp_karaf-0.23.0_system_net_java_dev_stax-utils_stax-utils_20070216_stax-utils-20070216.jar/0.0.0 2025-08-16T03:23:41,150 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.controller.cds-access-client/11.0.0 2025-08-16T03:23:41,166 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | 
org.opendaylight.controller.cds-mgmt-api/11.0.0 2025-08-16T03:23:41,168 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.controller.sal-common-util/11.0.0 2025-08-16T03:23:41,168 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.yang-data-codec-gson/14.0.14 2025-08-16T03:23:41,169 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.yang-data-codec-xml/14.0.14 2025-08-16T03:23:41,169 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.yang-data-tree-ri/14.0.14 2025-08-16T03:23:41,172 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.mdsal-dom-schema-osgi/14.0.13 2025-08-16T03:23:41,181 | INFO | features-3-thread-1 | OSGiDOMSchemaService | 251 - org.opendaylight.mdsal.mdsal-dom-schema-osgi - 14.0.13 | DOM Schema services activated 2025-08-16T03:23:41,182 | INFO | features-3-thread-1 | OSGiDOMSchemaService | 251 - org.opendaylight.mdsal.mdsal-dom-schema-osgi - 14.0.13 | Updating context to generation 1 2025-08-16T03:23:41,183 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.controller.sal-distributed-datastore/11.0.0 2025-08-16T03:23:41,204 | INFO | features-3-thread-1 | OSGiDatastoreContextIntrospectorFactory | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Datastore Context Introspector activated 2025-08-16T03:23:41,207 | INFO | features-3-thread-1 | FileModuleShardConfigProvider | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Shard configuration provider started 2025-08-16T03:23:41,207 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.controller.eos-dom-akka/11.0.0 2025-08-16T03:23:41,210 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc9641/14.0.13 2025-08-16T03:23:41,212 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc9645-ietf-tls-server/14.0.13 2025-08-16T03:23:41,213 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.mdsal-eos-binding-api/14.0.13 2025-08-16T03:23:41,214 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.mdsal-singleton-api/14.0.13 2025-08-16T03:23:41,215 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.api/0.20.0 2025-08-16T03:23:41,215 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | jakarta.validation.jakarta.validation-api/2.0.2 2025-08-16T03:23:41,216 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | io.netty.codec-compression/4.2.2.Final 2025-08-16T03:23:41,217 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | io.netty.codec-http/4.2.2.Final 2025-08-16T03:23:41,217 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | 
io.netty.codec-http2/4.2.2.Final 2025-08-16T03:23:41,218 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.commons.commons-codec/1.15.0 2025-08-16T03:23:41,218 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.transport-api/9.0.0 2025-08-16T03:23:41,219 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc9643-ietf-tcp-server/14.0.13 2025-08-16T03:23:41,219 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.transport-tcp/9.0.0 2025-08-16T03:23:41,220 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc9645-ietf-tls-client/14.0.13 2025-08-16T03:23:41,220 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.transport-tls/9.0.0 2025-08-16T03:23:41,220 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.iana.crypt-hash/14.0.13 2025-08-16T03:23:41,221 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.transport-http/9.0.0 2025-08-16T03:23:41,221 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.eos-binding-adapter/14.0.13 2025-08-16T03:23:41,223 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.applications.device-ownership-service/0.20.0 2025-08-16T03:23:41,225 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.extension-api/0.20.0 2025-08-16T03:23:41,238 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.iana.ssh-public-key-algs/14.0.13 2025-08-16T03:23:41,238 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.iana.ssh-encryption-algs/14.0.13 2025-08-16T03:23:41,238 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.iana.ssh-key-exchange-algs/14.0.13 2025-08-16T03:23:41,239 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.iana.ssh-mac-algs/14.0.13 2025-08-16T03:23:41,239 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc9644-ietf-ssh-common/14.0.13 2025-08-16T03:23:41,239 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc7407-ietf-x509-cert-to-name/14.0.13 2025-08-16T03:23:41,239 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.model.draft-ietf-restconf-server/9.0.0 2025-08-16T03:23:41,240 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.glassfish.jersey.core.jersey-client/2.47.0 2025-08-16T03:23:41,240 | INFO | 
features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.glassfish.jersey.core.jersey-server/2.47.0 2025-08-16T03:23:41,241 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.truststore-none/9.0.0 2025-08-16T03:23:41,242 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.jolokia.osgi/1.7.2 2025-08-16T03:23:41,245 | INFO | features-3-thread-1 | StoppableHttpServiceFactory | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Binding HTTP Service for bundle: [org.jolokia.osgi_1.7.2 [155]] 2025-08-16T03:23:41,262 | INFO | paxweb-config-3-thread-1 | HttpServiceEnabled | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering ServletModel{id=ServletModel-4,name='org.jolokia.osgi.servlet.JolokiaServlet',alias='/jolokia',urlPatterns=[/jolokia/*],servlet=org.jolokia.osgi.servlet.JolokiaServlet@37e719f9,contexts=[{HS,OCM-5,context:629325353,/}]} 2025-08-16T03:23:41,263 | INFO | paxweb-config-3-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of ServletModel{id=ServletModel-4,name='org.jolokia.osgi.servlet.JolokiaServlet',alias='/jolokia',urlPatterns=[/jolokia/*],servlet=org.jolokia.osgi.servlet.JolokiaServlet@37e719f9,contexts=null}", size=3} 2025-08-16T03:23:41,263 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Adding OsgiContextModel{HS,id=OCM-5,name='context:629325353',path='/',bundle=org.jolokia.osgi,context=WebContainerContextWrapper{bundle=org.jolokia.osgi_1.7.2 [155],contextId='context:629325353',delegate=org.jolokia.osgi.security.ServiceAuthenticationHttpContext@2582be29}} to o.o.p.w.s.j.i.PaxWebServletContextHandler@1d0954b9{/,null,STOPPED} 2025-08-16T03:23:41,265 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing default OSGi context model for o.o.p.w.s.j.i.PaxWebServletContextHandler@1d0954b9{/,null,STOPPED} 2025-08-16T03:23:41,265 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Adding servlet ServletModel{id=ServletModel-4,name='org.jolokia.osgi.servlet.JolokiaServlet',alias='/jolokia',urlPatterns=[/jolokia/*],servlet=org.jolokia.osgi.servlet.JolokiaServlet@37e719f9,contexts=[{HS,OCM-5,context:629325353,/}]} 2025-08-16T03:23:41,270 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Starting Jetty context "/" with default Osgi Context OsgiContextModel{HS,id=OCM-5,name='context:629325353',path='/',bundle=org.jolokia.osgi,context=WebContainerContextWrapper{bundle=org.jolokia.osgi_1.7.2 [155],contextId='context:629325353',delegate=org.jolokia.osgi.security.ServiceAuthenticationHttpContext@2582be29}} 2025-08-16T03:23:41,289 | INFO | paxweb-config-3-thread-1 | osgi | 155 - org.jolokia.osgi - 1.7.2 | No access restrictor found, access to any MBean is allowed 2025-08-16T03:23:41,316 | INFO | paxweb-config-3-thread-1 | ContextHandler | 139 - org.eclipse.jetty.util - 9.4.57.v20241219 | Started o.o.p.w.s.j.i.PaxWebServletContextHandler@1d0954b9{/,null,AVAILABLE} 2025-08-16T03:23:41,316 | INFO | paxweb-config-3-thread-1 | OsgiServletContext | 397 - org.ops4j.pax.web.pax-web-spi - 8.0.30 | Registering 
OsgiServletContext{model=OsgiContextModel{HS,id=OCM-5,name='context:629325353',path='/',bundle=org.jolokia.osgi,context=WebContainerContextWrapper{bundle=org.jolokia.osgi_1.7.2 [155],contextId='context:629325353',delegate=org.jolokia.osgi.security.ServiceAuthenticationHttpContext@2582be29}}} as OSGi service for "/" context path 2025-08-16T03:23:41,320 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.aaa.password-service-api/0.21.0 2025-08-16T03:23:41,322 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.aaa.cert/0.21.0 2025-08-16T03:23:41,366 | INFO | features-3-thread-1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.cert/0.21.0 is waiting for dependencies [(objectClass=org.opendaylight.aaa.encrypt.AAAEncryptionService), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker)), (objectClass=org.opendaylight.mdsal.binding.api.RpcProviderService)] 2025-08-16T03:23:41,368 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.databind/9.0.0 2025-08-16T03:23:41,369 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc6243/14.0.13 2025-08-16T03:23:41,369 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.restconf-api/9.0.0 2025-08-16T03:23:41,370 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc8072/14.0.13 2025-08-16T03:23:41,371 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.restconf-server-api/9.0.0 2025-08-16T03:23:41,371 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | javassist/3.30.2.GA 2025-08-16T03:23:41,372 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.model.ietf-topology/2013.10.21.26_13 2025-08-16T03:23:41,373 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc8342-ietf-datastores/14.0.13 2025-08-16T03:23:41,374 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc8525/14.0.13 2025-08-16T03:23:41,374 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc7952/14.0.13 2025-08-16T03:23:41,375 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc8342-ietf-origin/14.0.13 2025-08-16T03:23:41,376 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc8526/14.0.13 2025-08-16T03:23:41,377 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.yang-model-export/14.0.14 2025-08-16T03:23:41,377 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - 
org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc8040-ietf-restconf-monitoring/14.0.13 2025-08-16T03:23:41,378 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc6470/14.0.13 2025-08-16T03:23:41,379 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.glassfish.hk2.external.aopalliance-repackaged/2.6.1 2025-08-16T03:23:41,379 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.glassfish.hk2.osgi-resource-locator/1.0.3 2025-08-16T03:23:41,436 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.mdsal-binding-spi/14.0.13 2025-08-16T03:23:41,437 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.common/0.20.0 2025-08-16T03:23:41,438 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin/0.20.0 2025-08-16T03:23:41,439 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.mdsal-singleton-impl/14.0.13 2025-08-16T03:23:41,441 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.impl/0.20.0 2025-08-16T03:23:41,469 | INFO | features-3-thread-1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.impl/0.20.0 is waiting for dependencies [(&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker)), (objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationServiceFactory)] 2025-08-16T03:23:41,472 | INFO | features-3-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Command registration delayed for bundle org.opendaylight.openflowplugin.impl/0.20.0. Missing service: [org.opendaylight.openflowplugin.api.openflow.statistics.ofpspecific.MessageIntelligenceAgency] 2025-08-16T03:23:41,478 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.impl/0.20.0 is waiting for dependencies [(&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker))] 2025-08-16T03:23:41,483 | INFO | features-3-thread-1 | MessageIntelligenceAgencyImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Registered MBean org.opendaylight.openflowplugin.impl.statistics.ofpspecific:type=MessageIntelligenceAgencyMXBean 2025-08-16T03:23:41,483 | INFO | features-3-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.opendaylight.openflowplugin.impl/0.20.0 2025-08-16T03:23:41,484 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.applications.reconciliation-framework/0.20.0 2025-08-16T03:23:41,490 | INFO | features-3-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Command registration delayed for bundle org.opendaylight.openflowplugin.applications.reconciliation-framework/0.20.0. 
Missing service: [org.opendaylight.openflowplugin.applications.reconciliation.ReconciliationManager] 2025-08-16T03:23:41,494 | INFO | features-3-thread-1 | ReconciliationManagerImpl | 302 - org.opendaylight.openflowplugin.applications.reconciliation-framework - 0.20.0 | ReconciliationManager started 2025-08-16T03:23:41,495 | INFO | features-3-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.opendaylight.openflowplugin.applications.reconciliation-framework/0.20.0 2025-08-16T03:23:41,495 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.applications.arbitratorreconciliation-api/0.20.0 2025-08-16T03:23:41,497 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.srm-impl/0.20.0 2025-08-16T03:23:41,507 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.applications.forwardingrules-manager/0.20.0 2025-08-16T03:23:41,514 | INFO | features-3-thread-1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.forwardingrules-manager/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.mdsal.binding.api.RpcService), (objectClass=org.opendaylight.mdsal.binding.api.RpcProviderService), (objectClass=org.opendaylight.openflowplugin.applications.frm.recovery.OpenflowServiceRecoveryHandler), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker)), (objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService)] 2025-08-16T03:23:41,522 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.forwardingrules-manager/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.mdsal.binding.api.RpcService), (objectClass=org.opendaylight.mdsal.binding.api.RpcProviderService), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker)), (objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService)] 2025-08-16T03:23:41,522 | INFO | features-3-thread-1 | OpenflowServiceRecoveryHandlerImpl | 299 - org.opendaylight.openflowplugin.applications.forwardingrules-manager - 0.20.0 | Registering openflowplugin service recovery handlers 2025-08-16T03:23:41,523 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.model.sal-remote/9.0.0 2025-08-16T03:23:41,524 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.libraries.liblldp/0.20.0 2025-08-16T03:23:41,525 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.applications.topology-lldp-discovery/0.20.0 2025-08-16T03:23:41,530 | INFO | features-3-thread-1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.topology-lldp-discovery/0.20.0 is waiting for dependencies 
[(objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker)), (objectClass=org.opendaylight.mdsal.binding.api.NotificationPublishService), (objectClass=org.opendaylight.mdsal.binding.api.NotificationService), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (objectClass=org.opendaylight.mdsal.eos.binding.api.EntityOwnershipService)] 2025-08-16T03:23:41,533 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.applications.lldp-speaker/0.20.0 2025-08-16T03:23:41,538 | INFO | features-3-thread-1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.lldp-speaker/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.openflowplugin.applications.deviceownershipservice.DeviceOwnershipService), (objectClass=org.opendaylight.mdsal.binding.api.RpcService), (objectClass=org.opendaylight.mdsal.binding.api.RpcProviderService), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker)), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker)] 2025-08-16T03:23:41,541 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.dom-api/9.0.0 2025-08-16T03:23:41,541 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.model.rfc5277/9.0.0 2025-08-16T03:23:41,541 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.applications.bulk-o-matic/0.20.0 2025-08-16T03:23:41,543 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.scala-lang.scala-reflect/2.13.16.v20250107-233423-VFINAL-3f6bdae 2025-08-16T03:23:41,544 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.netconf-common-mdsal/9.0.0 2025-08-16T03:23:41,544 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.restconf-server-spi/9.0.0 2025-08-16T03:23:41,545 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.restconf-mdsal-spi/9.0.0 2025-08-16T03:23:41,545 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc8650/14.0.13 2025-08-16T03:23:41,545 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.mdsal-dom-broker/14.0.13 2025-08-16T03:23:41,555 | INFO | features-3-thread-1 | AbstractAdaptedService | 204 - org.opendaylight.mdsal.binding-dom-adapter - 14.0.13 | Binding/DOM adapter for MountPointService activated 2025-08-16T03:23:41,563 | INFO | features-3-thread-1 | DOMRpcRouter | 250 - org.opendaylight.mdsal.mdsal-dom-broker - 14.0.13 | DOM RPC/Action router started 2025-08-16T03:23:41,566 | INFO | features-3-thread-1 | AbstractAdaptedService | 204 - org.opendaylight.mdsal.binding-dom-adapter - 14.0.13 | Binding/DOM adapter for ActionProviderService activated 2025-08-16T03:23:41,570 | INFO | features-3-thread-1 | AbstractAdaptedService | 204 - org.opendaylight.mdsal.binding-dom-adapter - 
14.0.13 | Binding/DOM adapter for ActionService activated 2025-08-16T03:23:41,575 | INFO | features-3-thread-1 | DOMNotificationRouter | 250 - org.opendaylight.mdsal.mdsal-dom-broker - 14.0.13 | DOM Notification Router started 2025-08-16T03:23:41,577 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.topology-lldp-discovery/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker)), (objectClass=org.opendaylight.mdsal.binding.api.NotificationPublishService), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (objectClass=org.opendaylight.mdsal.eos.binding.api.EntityOwnershipService)] 2025-08-16T03:23:41,577 | INFO | features-3-thread-1 | AbstractAdaptedService | 204 - org.opendaylight.mdsal.binding-dom-adapter - 14.0.13 | Binding/DOM adapter for NotificationService activated 2025-08-16T03:23:41,580 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.topology-lldp-discovery/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker)), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (objectClass=org.opendaylight.mdsal.eos.binding.api.EntityOwnershipService)] 2025-08-16T03:23:41,581 | INFO | features-3-thread-1 | AbstractAdaptedService | 204 - org.opendaylight.mdsal.binding-dom-adapter - 14.0.13 | Binding/DOM adapter for NotificationPublishService activated 2025-08-16T03:23:41,584 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.cert/0.21.0 is waiting for dependencies [(objectClass=org.opendaylight.aaa.encrypt.AAAEncryptionService), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker))] 2025-08-16T03:23:41,585 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.forwardingrules-manager/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.mdsal.binding.api.RpcService), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker)), (objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService)] 2025-08-16T03:23:41,585 | INFO | features-3-thread-1 | AbstractAdaptedService | 204 - org.opendaylight.mdsal.binding-dom-adapter - 14.0.13 | Binding/DOM adapter for RpcProviderService activated 2025-08-16T03:23:41,585 | INFO | Blueprint Extender: 2 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.lldp-speaker/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.openflowplugin.applications.deviceownershipservice.DeviceOwnershipService), (objectClass=org.opendaylight.mdsal.binding.api.RpcService), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker)), 
(objectClass=org.opendaylight.mdsal.binding.api.DataBroker)] 2025-08-16T03:23:41,588 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.forwardingrules-manager/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker)), (objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService)] 2025-08-16T03:23:41,588 | INFO | features-3-thread-1 | AbstractAdaptedService | 204 - org.opendaylight.mdsal.binding-dom-adapter - 14.0.13 | Binding/DOM adapter for RpcService activated 2025-08-16T03:23:41,588 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.lldp-speaker/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.openflowplugin.applications.deviceownershipservice.DeviceOwnershipService), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker)), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker)] 2025-08-16T03:23:41,588 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.aaa.web.api/0.21.0 2025-08-16T03:23:41,592 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.aaa.web.servlet-api/0.21.0 2025-08-16T03:23:41,598 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.yanglib-mdsal-writer/9.0.0 2025-08-16T03:23:41,601 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.aaa.web.osgi-impl/0.21.0 2025-08-16T03:23:41,604 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.glassfish.jersey.containers.jersey-container-servlet-core/2.47.0 2025-08-16T03:23:41,604 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.aaa.web.servlet-jersey2/0.21.0 2025-08-16T03:23:41,609 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.restconf-server-jaxrs/9.0.0 2025-08-16T03:23:41,614 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.restconf-subscription/9.0.0 2025-08-16T03:23:41,618 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.sal-remote-impl/9.0.0 2025-08-16T03:23:41,621 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.odl-device-notification/9.0.0 2025-08-16T03:23:41,623 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.restconf-server-mdsal/9.0.0 2025-08-16T03:23:41,628 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.restconf-server/9.0.0 2025-08-16T03:23:41,630 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.infrautils.diagstatus-shell/7.1.4 2025-08-16T03:23:41,634 | INFO | features-3-thread-1 | 
CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.opendaylight.infrautils.diagstatus-shell/7.1.4 2025-08-16T03:23:41,634 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.model.topology/0.20.0 2025-08-16T03:23:41,635 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.applications.topology-manager/0.20.0 2025-08-16T03:23:41,639 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | io.dropwizard.metrics.jvm/4.2.32 2025-08-16T03:23:41,640 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc9644-ietf-ssh-server/14.0.13 2025-08-16T03:23:41,641 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.controller.cds-dom-api/11.0.0 2025-08-16T03:23:41,641 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.controller.sal-remoterpc-connector/11.0.0 2025-08-16T03:23:41,644 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding.model.ietf.rfc9644-ietf-ssh-client/14.0.13 2025-08-16T03:23:41,644 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.yang-repo-fs/14.0.14 2025-08-16T03:23:41,645 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.controller.raft-journal/11.0.0 2025-08-16T03:23:41,645 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.controller.atomix-storage/11.0.0 2025-08-16T03:23:41,646 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.scala-lang.modules.scala-parser-combinators/1.1.2 2025-08-16T03:23:41,647 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.extension-onf/0.20.0 2025-08-16T03:23:41,649 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.aaa.jetty-auth-log-filter/0.21.0 2025-08-16T03:23:41,651 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.applications.arbitratorreconciliation-impl/0.20.0 2025-08-16T03:23:41,655 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.aaa.tokenauthrealm/0.21.0 2025-08-16T03:23:41,662 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.aaa.password-service-impl/0.21.0 2025-08-16T03:23:41,666 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.aaa.idm-store-h2/0.21.0 2025-08-16T03:23:41,668 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.apache.karaf.jdbc.core/4.4.7 2025-08-16T03:23:41,680 | INFO | features-3-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.jdbc.core/4.4.7 2025-08-16T03:23:41,681 | INFO | 
features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | io.dropwizard.metrics.healthchecks/4.2.32 2025-08-16T03:23:41,682 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.glassfish.jersey.containers.jersey-container-servlet/2.47.0 2025-08-16T03:23:41,683 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.keystore-none/9.0.0 2025-08-16T03:23:41,683 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | com.googlecode.json-simple/1.1.1 2025-08-16T03:23:41,684 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.yangtools.yang-data-transform/14.0.14 2025-08-16T03:23:41,685 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.mdsal.binding-util/14.0.13 2025-08-16T03:23:41,685 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.restconf-nb/9.0.0 2025-08-16T03:23:41,689 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.glassfish.jersey.media.jersey-media-sse/2.47.0 2025-08-16T03:23:41,690 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.applications.of-switch-config-pusher/0.20.0 2025-08-16T03:23:41,708 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.srm-shell/0.20.0 2025-08-16T03:23:41,774 | INFO | features-3-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Command registration delayed for bundle org.opendaylight.openflowplugin.srm-shell/0.20.0. 
Missing service: [org.opendaylight.mdsal.binding.api.DataBroker, org.opendaylight.serviceutils.srm.spi.RegistryControl] 2025-08-16T03:23:41,775 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | karaf.branding/14.1.0 2025-08-16T03:23:41,779 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.jspecify.jspecify/1.0.0 2025-08-16T03:23:41,780 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | io.dropwizard.metrics.graphite/4.2.32 2025-08-16T03:23:41,781 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.transport-ssh/9.0.0 2025-08-16T03:23:41,782 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.openflowjava.blueprint-config/0.20.0 2025-08-16T03:23:41,788 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.controller.sal-akka-segmented-journal/11.0.0 2025-08-16T03:23:41,790 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.glassfish.hk2.api/2.6.1 2025-08-16T03:23:41,793 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.glassfish.hk2.locator/2.6.1 2025-08-16T03:23:41,795 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.glassfish.jersey.inject.jersey-hk2/2.47.0 2025-08-16T03:23:41,796 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.glassfish.hk2.utils/2.6.1 2025-08-16T03:23:41,797 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.aaa.filterchain/0.21.0 2025-08-16T03:23:41,804 | INFO | features-3-thread-1 | CustomFilterAdapterConfigurationImpl | 166 - org.opendaylight.aaa.filterchain - 0.21.0 | Custom filter properties updated: {service.pid=org.opendaylight.aaa.filterchain, osgi.ds.satisfying.condition.target=(osgi.condition.id=true), customFilterList=, component.name=org.opendaylight.aaa.filterchain.configuration.impl.CustomFilterAdapterConfigurationImpl, felix.fileinstall.filename=file:/tmp/karaf-0.23.0/etc/org.opendaylight.aaa.filterchain.cfg, component.id=120, Filter.target=(org.opendaylight.aaa.filterchain.filter=true)} 2025-08-16T03:23:41,805 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.aaa.shiro/0.21.0 2025-08-16T03:23:41,820 | INFO | features-3-thread-1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.shiro/0.21.0 is waiting for dependencies [(objectClass=org.opendaylight.aaa.cert.api.ICertificateManager), (objectClass=org.opendaylight.aaa.api.IIDMStore), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker)), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker)), (objectClass=org.opendaylight.aaa.api.password.service.PasswordHashService), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (objectClass=org.opendaylight.aaa.api.PasswordCredentialAuth)] 2025-08-16T03:23:41,839 | INFO | features-3-thread-1 | StoppableHttpServiceFactory | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Binding HTTP Service for bundle: 
[org.opendaylight.aaa.shiro_0.21.0 [172]] 2025-08-16T03:23:41,840 | INFO | paxweb-config-3-thread-1 | HttpServiceEnabled | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering ServletModel{id=ServletModel-8,name='MoonTokenEndpoint',urlPatterns=[/moon],contexts=[{WB,OCM-1,default,/}]} 2025-08-16T03:23:41,840 | INFO | paxweb-config-3-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of ServletModel{id=ServletModel-8,name='MoonTokenEndpoint',urlPatterns=[/moon],contexts=[{WB,OCM-1,default,/}]}", size=1} 2025-08-16T03:23:41,840 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Adding servlet ServletModel{id=ServletModel-8,name='MoonTokenEndpoint',urlPatterns=[/moon],contexts=[{WB,OCM-1,default,/}]} 2025-08-16T03:23:41,842 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | jakarta.activation-api/1.2.2 2025-08-16T03:23:41,844 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.glassfish.jersey.core.jersey-common/2.47.0 2025-08-16T03:23:41,845 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | jakarta.ws.rs-api/2.1.6 2025-08-16T03:23:41,845 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.aaa.authn-api/0.21.0 2025-08-16T03:23:41,846 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.controller.sal-clustering-commons/11.0.0 2025-08-16T03:23:41,853 | INFO | features-3-thread-1 | FileAkkaConfigurationReader | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | File-based Pekko configuration reader enabled 2025-08-16T03:23:41,853 | INFO | features-3-thread-1 | OSGiActorSystemProvider | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Actor System provider starting 2025-08-16T03:23:42,034 | INFO | features-3-thread-1 | ActorSystemProviderImpl | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Creating new ActorSystem 2025-08-16T03:23:42,395 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | Slf4jLogger | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Slf4jLogger started 2025-08-16T03:23:42,729 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | ArteryTransport | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Remoting started with transport [Artery tcp]; listening on address [pekko://opendaylight-cluster-data@10.30.170.30:2550] with UID [-419904952892648527] 2025-08-16T03:23:42,741 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.30:2550] - Starting up, Pekko version [1.0.3] ... 
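[Editor's note] The entries above show the controller creating its Pekko ActorSystem and starting Artery TCP remoting on 10.30.170.30:2550 for the "opendaylight-cluster-data" system. As a rough, hedged illustration only (this is not the controller's actual wiring; the configuration keys and literal values below are assumptions that mirror standard Pekko settings), an equivalent standalone ActorSystem could be brought up from Java roughly like this:

// Minimal sketch of creating a clustered Pekko ActorSystem with Artery remoting.
// Hostname, port and system name are taken from the log; config keys are assumed
// to follow Pekko's standard naming and are not copied from ODL configuration.
import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;
import org.apache.pekko.actor.ActorSystem;

public class ClusterNodeSketch {
    public static void main(String[] args) {
        Config config = ConfigFactory.parseString(
                "pekko.actor.provider = cluster\n"
              + "pekko.remote.artery.canonical.hostname = \"10.30.170.30\"\n"
              + "pekko.remote.artery.canonical.port = 2550\n")
            .withFallback(ConfigFactory.load());

        // Same system name as in the log: opendaylight-cluster-data
        ActorSystem system = ActorSystem.create("opendaylight-cluster-data", config);
        system.log().info("ActorSystem started: {}", system.name());
    }
}

In the real deployment the equivalent settings come from the controller's pekko configuration file (read by the file-based configuration reader logged above), not from code.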
2025-08-16T03:23:42,793 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.30:2550] - Registered cluster JMX MBean [pekko:type=Cluster] 2025-08-16T03:23:42,814 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.30:2550] - Started up successfully 2025-08-16T03:23:42,946 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | SplitBrainResolver | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | SBR started. Config: strategy [KeepMajority], stable-after [7 seconds], down-all-when-unstable [5250 milliseconds], selfUniqueAddress [pekko://opendaylight-cluster-data@10.30.170.30:2550#-419904952892648527], selfDc [default]. 2025-08-16T03:23:43,134 | INFO | features-3-thread-1 | OSGiActorSystemProvider | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Actor System provider started 2025-08-16T03:23:43,137 | INFO | features-3-thread-1 | OSGiDistributedDataStore | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Distributed Datastore type CONFIGURATION starting 2025-08-16T03:23:43,271 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.30:2550] - Received InitJoin message from [Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/system/cluster/core/daemon/joinSeedNodeProcess-1#-1666093504]], but this node is not initialized yet 2025-08-16T03:23:43,391 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.30:2550] - Received InitJoin message from [Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/system/cluster/core/daemon/firstSeedNodeProcess-1#2112592169]], but this node is not initialized yet 2025-08-16T03:23:43,391 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.30:2550] - Received InitJoin message from [Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/system/cluster/core/daemon/firstSeedNodeProcess-1#2112592169]], but this node is not initialized yet 2025-08-16T03:23:43,391 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.30:2550] - Received InitJoin message from [Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/system/cluster/core/daemon/firstSeedNodeProcess-1#2112592169]], but this node is not initialized yet 2025-08-16T03:23:43,392 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.30:2550] - Received InitJoin message from [Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/system/cluster/core/daemon/firstSeedNodeProcess-1#2112592169]], but this node is not initialized yet 2025-08-16T03:23:43,525 | INFO | features-3-thread-1 | DistributedDataStoreFactory | 
196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Create data store instance of type : config 2025-08-16T03:23:43,526 | INFO | features-3-thread-1 | AbstractModuleShardConfigProvider | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Config file exists - reading config from it 2025-08-16T03:23:43,526 | INFO | features-3-thread-1 | AbstractModuleShardConfigProvider | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Config file exists - reading config from it 2025-08-16T03:23:43,533 | INFO | features-3-thread-1 | AbstractDataStore | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Creating ShardManager : shardmanager-config 2025-08-16T03:23:43,606 | INFO | opendaylight-cluster-data-shard-dispatcher-31 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Starting ShardManager shard-manager-config 2025-08-16T03:23:43,619 | INFO | opendaylight-cluster-data-pekko.persistence.dispatchers.default-plugin-dispatcher-33 | SegmentedFileJournal | 191 - org.opendaylight.controller.sal-akka-segmented-journal - 11.0.0 | Initialized with root directory segmented-journal with storage MAPPED 2025-08-16T03:23:43,683 | INFO | opendaylight-cluster-data-shard-dispatcher-31 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Recovery complete 2025-08-16T03:23:43,689 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | RecoveringClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-config: saving tombstone ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0} 2025-08-16T03:23:43,759 | INFO | features-3-thread-1 | DistributedDataStoreFactory | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Data store config is using tell-based protocol 2025-08-16T03:23:43,764 | INFO | features-3-thread-1 | AbstractModuleShardConfigProvider | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Config file exists - reading config from it 2025-08-16T03:23:43,764 | INFO | features-3-thread-1 | AbstractModuleShardConfigProvider | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Config file exists - reading config from it 2025-08-16T03:23:43,765 | INFO | features-3-thread-1 | OSGiDistributedDataStore | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Distributed Datastore type OPERATIONAL starting 2025-08-16T03:23:43,766 | INFO | features-3-thread-1 | DistributedDataStoreFactory | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Create data store instance of type : operational 2025-08-16T03:23:43,766 | INFO | features-3-thread-1 | AbstractDataStore | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Creating ShardManager : shardmanager-operational 2025-08-16T03:23:43,785 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-topology-config: Shard created, persistent : true 2025-08-16T03:23:43,785 | INFO | opendaylight-cluster-data-shard-dispatcher-32 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-default-config: Shard created, persistent : true 2025-08-16T03:23:43,785 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardManager | 196 - 
org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Starting ShardManager shard-manager-operational 2025-08-16T03:23:43,786 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: Shard created, persistent : true 2025-08-16T03:23:43,790 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Recovery complete 2025-08-16T03:23:43,794 | INFO | opendaylight-cluster-data-shard-dispatcher-31 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-toaster-config: Shard created, persistent : true 2025-08-16T03:23:43,803 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | RecoveringClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-operational: saving tombstone ClientIdentifier{frontend=member-3-frontend-datastore-operational, generation=0} 2025-08-16T03:23:43,808 | INFO | features-3-thread-1 | DistributedDataStoreFactory | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Data store operational is using tell-based protocol 2025-08-16T03:23:43,812 | INFO | Blueprint Extender: 2 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.lldp-speaker/0.20.0 is waiting for dependencies [(&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker)), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker)] 2025-08-16T03:23:43,813 | INFO | Blueprint Extender: 2 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.topology-lldp-discovery/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker)), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker)] 2025-08-16T03:23:43,821 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-topology-operational: Shard created, persistent : false 2025-08-16T03:23:43,821 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-default-operational: Shard created, persistent : false 2025-08-16T03:23:43,823 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier:pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-config/member-3-shard-toaster-config/member-3-shard-toaster-config-notifier#929022397 created and ready for shard:member-3-shard-toaster-config 2025-08-16T03:23:43,824 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier:pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-operational/member-3-shard-default-operational/member-3-shard-default-operational-notifier#972457837 created and ready for shard:member-3-shard-default-operational 2025-08-16T03:23:43,825 | INFO | 
opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier:pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-operational/member-3-shard-topology-operational/member-3-shard-topology-operational-notifier#-891692970 created and ready for shard:member-3-shard-topology-operational 2025-08-16T03:23:43,826 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier:pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-config/member-3-shard-inventory-config/member-3-shard-inventory-config-notifier#-284910539 created and ready for shard:member-3-shard-inventory-config 2025-08-16T03:23:43,827 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier:pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-config/member-3-shard-default-config/member-3-shard-default-config-notifier#1858555581 created and ready for shard:member-3-shard-default-config 2025-08-16T03:23:43,828 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier:pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-config/member-3-shard-topology-config/member-3-shard-topology-config-notifier#1037043200 created and ready for shard:member-3-shard-topology-config 2025-08-16T03:23:43,831 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | RaftActor | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-operational: Starting recovery with journal batch size 1 2025-08-16T03:23:43,831 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | RaftActor | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-operational: Starting recovery with journal batch size 1 2025-08-16T03:23:43,834 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | RaftActor | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-config: Starting recovery with journal batch size 1 2025-08-16T03:23:43,837 | INFO | opendaylight-cluster-data-shard-dispatcher-31 | RaftActor | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-toaster-config: Starting recovery with journal batch size 1 2025-08-16T03:23:43,841 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | RaftActor | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-config: Starting recovery with journal batch size 1 2025-08-16T03:23:43,843 | INFO | opendaylight-cluster-data-shard-dispatcher-32 | RaftActor | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-config: Starting recovery with journal batch size 1 2025-08-16T03:23:43,846 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-operational: Shard created, persistent : false 2025-08-16T03:23:43,847 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | RaftActor | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-operational: Starting recovery with journal batch size 1 2025-08-16T03:23:43,848 | INFO | 
opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier:pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-operational/member-3-shard-inventory-operational/member-3-shard-inventory-operational-notifier#-1114063814 created and ready for shard:member-3-shard-inventory-operational 2025-08-16T03:23:43,848 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-toaster-operational: Shard created, persistent : false 2025-08-16T03:23:43,849 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | RaftActor | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-toaster-operational: Starting recovery with journal batch size 1 2025-08-16T03:23:43,850 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-42 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier:pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-operational/member-3-shard-toaster-operational/member-3-shard-toaster-operational-notifier#-1764785521 created and ready for shard:member-3-shard-toaster-operational 2025-08-16T03:23:43,872 | INFO | opendaylight-cluster-data-pekko.persistence.dispatchers.default-plugin-dispatcher-46 | SegmentedFileJournal | 191 - org.opendaylight.controller.sal-akka-segmented-journal - 11.0.0 | Initialized with root directory segmented-journal with storage DISK 2025-08-16T03:23:43,886 | INFO | features-3-thread-1 | OSGiRemoteOpsProvider | 197 - org.opendaylight.controller.sal-remoterpc-connector - 11.0.0 | Remote Operations service starting 2025-08-16T03:23:43,890 | INFO | features-3-thread-1 | OSGiRemoteOpsProvider | 197 - org.opendaylight.controller.sal-remoterpc-connector - 11.0.0 | Remote Operations service started 2025-08-16T03:23:43,890 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.controller.sal-akka-raft/11.0.0 2025-08-16T03:23:43,893 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | io.netty.common/4.2.2.Final 2025-08-16T03:23:43,895 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.openflowplugin.blueprint-config/0.20.0 2025-08-16T03:23:43,910 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.controller.sal-cluster-admin-impl/11.0.0 2025-08-16T03:23:43,922 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.keystore-api/9.0.0 2025-08-16T03:23:43,928 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | org.opendaylight.netconf.truststore-api/9.0.0 2025-08-16T03:23:43,931 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | EnabledRaftStorage | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-operational: journal open: applyTo=0 2025-08-16T03:23:43,931 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | EnabledRaftStorage | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-operational: journal open: applyTo=0 2025-08-16T03:23:43,932 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | EnabledRaftStorage | 190 - 
org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-config: journal open: applyTo=0 2025-08-16T03:23:43,932 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | EnabledRaftStorage | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-toaster-operational: journal open: applyTo=0 2025-08-16T03:23:43,933 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | EnabledRaftStorage | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-operational: journal open: applyTo=0 2025-08-16T03:23:43,934 | INFO | opendaylight-cluster-data-shard-dispatcher-32 | EnabledRaftStorage | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-config: journal open: applyTo=0 2025-08-16T03:23:43,934 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | EnabledRaftStorage | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-config: journal open: applyTo=0 2025-08-16T03:23:43,935 | INFO | opendaylight-cluster-data-shard-dispatcher-31 | EnabledRaftStorage | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-toaster-config: journal open: applyTo=0 2025-08-16T03:23:43,967 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-config: Recovery completed - Switching actor to Follower - last log index = -1, last log term = -1, snapshot index = -1, snapshot term = -1, journal size = 0 2025-08-16T03:23:43,969 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-operational: Recovery completed - Switching actor to Follower - last log index = -1, last log term = -1, snapshot index = -1, snapshot term = -1, journal size = 0 2025-08-16T03:23:43,969 | INFO | opendaylight-cluster-data-shard-dispatcher-32 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-toaster-operational: Recovery completed - Switching actor to Follower - last log index = -1, last log term = -1, snapshot index = -1, snapshot term = -1, journal size = 0 2025-08-16T03:23:43,970 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-toaster-config: Recovery completed - Switching actor to Follower - last log index = -1, last log term = -1, snapshot index = -1, snapshot term = -1, journal size = 0 2025-08-16T03:23:43,971 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-config: Recovery completed - Switching actor to Follower - last log index = -1, last log term = -1, snapshot index = -1, snapshot term = -1, journal size = 0 2025-08-16T03:23:43,971 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-operational: Recovery completed - Switching actor to Follower - last log index = -1, last log term = -1, snapshot index = -1, snapshot term = -1, journal size = 0 2025-08-16T03:23:43,971 | INFO | opendaylight-cluster-data-shard-dispatcher-31 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-config: Recovery completed - Switching actor to Follower - last log index = -1, last log term = -1, snapshot index = -1, snapshot 
term = -1, journal size = 0 2025-08-16T03:23:43,974 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-operational: Recovery completed - Switching actor to Follower - last log index = -1, last log term = -1, snapshot index = -1, snapshot term = -1, journal size = 0 2025-08-16T03:23:43,976 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-config: Local TermInfo store seeded with TermInfo{term=0} 2025-08-16T03:23:43,980 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-toaster-config: Local TermInfo store seeded with TermInfo{term=0} 2025-08-16T03:23:43,980 | INFO | opendaylight-cluster-data-shard-dispatcher-31 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-config: Local TermInfo store seeded with TermInfo{term=0} 2025-08-16T03:23:43,980 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-operational: Local TermInfo store seeded with TermInfo{term=0} 2025-08-16T03:23:43,981 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-config: Local TermInfo store seeded with TermInfo{term=0} 2025-08-16T03:23:43,981 | INFO | opendaylight-cluster-data-shard-dispatcher-32 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-toaster-operational: Local TermInfo store seeded with TermInfo{term=0} 2025-08-16T03:23:43,982 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-operational: Local TermInfo store seeded with TermInfo{term=0} 2025-08-16T03:23:43,983 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-operational: Local TermInfo store seeded with TermInfo{term=0} 2025-08-16T03:23:43,986 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-47 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-default-operational , received role change from null to Follower 2025-08-16T03:23:43,986 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-41 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-toaster-operational , received role change from null to Follower 2025-08-16T03:23:43,987 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-41 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-inventory-operational , received role change from null to Follower 2025-08-16T03:23:43,987 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-41 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-topology-operational , received role change from null to Follower 2025-08-16T03:23:43,990 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-41 | 
RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-default-config , received role change from null to Follower 2025-08-16T03:23:43,990 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-42 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-toaster-config , received role change from null to Follower 2025-08-16T03:23:43,991 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-42 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-topology-config , received role change from null to Follower 2025-08-16T03:23:43,994 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-47 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-inventory-config , received role change from null to Follower 2025-08-16T03:23:43,995 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-47 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-inventory-operational , registered listener pekko://opendaylight-cluster-data/user/shardmanager-operational 2025-08-16T03:23:43,996 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received role changed for member-3-shard-inventory-operational from null to Follower 2025-08-16T03:23:43,997 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-47 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-default-config , registered listener pekko://opendaylight-cluster-data/user/shardmanager-config 2025-08-16T03:23:43,997 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-42 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-inventory-config , registered listener pekko://opendaylight-cluster-data/user/shardmanager-config 2025-08-16T03:23:43,997 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-42 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-toaster-config , registered listener pekko://opendaylight-cluster-data/user/shardmanager-config 2025-08-16T03:23:43,997 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-47 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-toaster-operational , registered listener pekko://opendaylight-cluster-data/user/shardmanager-operational 2025-08-16T03:23:43,997 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-42 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-topology-config , registered listener pekko://opendaylight-cluster-data/user/shardmanager-config 2025-08-16T03:23:43,997 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-47 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-default-operational , registered listener 
pekko://opendaylight-cluster-data/user/shardmanager-operational 2025-08-16T03:23:43,997 | INFO | opendaylight-cluster-data-shard-dispatcher-31 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received role changed for member-3-shard-toaster-operational from null to Follower 2025-08-16T03:23:43,997 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-41 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-topology-operational , registered listener pekko://opendaylight-cluster-data/user/shardmanager-operational 2025-08-16T03:23:43,998 | INFO | opendaylight-cluster-data-shard-dispatcher-31 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received role changed for member-3-shard-default-operational from null to Follower 2025-08-16T03:23:43,998 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received role changed for member-3-shard-default-config from null to Follower 2025-08-16T03:23:43,998 | INFO | opendaylight-cluster-data-shard-dispatcher-31 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received role changed for member-3-shard-topology-operational from null to Follower 2025-08-16T03:23:43,998 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received role changed for member-3-shard-inventory-config from null to Follower 2025-08-16T03:23:43,998 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received role changed for member-3-shard-toaster-config from null to Follower 2025-08-16T03:23:43,998 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received role changed for member-3-shard-topology-config from null to Follower 2025-08-16T03:23:44,057 | INFO | features-3-thread-1 | Activator | 99 - org.apache.karaf.deployer.features - 4.4.7 | Deployment finished. Registering FeatureDeploymentListener 2025-08-16T03:23:44,472 | INFO | features-3-thread-1 | FeaturesServiceImpl | 18 - org.apache.karaf.features.core - 4.4.7 | Done. 2025-08-16T03:23:48,888 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-42 | CandidateRegistryInit | 185 - org.opendaylight.controller.eos-dom-akka - 11.0.0 | member-3 : Initial removal of candidates from previous iteration failed. Rescheduling. java.util.concurrent.TimeoutException: Ask timed out on [Actor[pekko://opendaylight-cluster-data/system/singletonProxyOwnerSupervisor-no-dc#1267004491]] after [5000 ms]. Message of type [org.opendaylight.controller.eos.akka.owner.supervisor.command.ClearCandidatesForMember]. A typical reason for `AskTimeoutException` is that the recipient actor didn't send a reply. at org.apache.pekko.actor.typed.scaladsl.AskPattern$.$anonfun$onTimeout$1(AskPattern.scala:141) ~[bundleFile:?] at org.apache.pekko.pattern.PromiseActorRef$.$anonfun$apply$1(AskSupport.scala:737) ~[bundleFile:?] at org.apache.pekko.actor.Scheduler$$anon$7.run(Scheduler.scala:491) ~[bundleFile:?] 
at scala.concurrent.ExecutionContext$parasitic$.execute(ExecutionContext.scala:222) ~[bundleFile:?] at org.apache.pekko.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:384) ~[bundleFile:?] at org.apache.pekko.actor.LightArrayRevolverScheduler$$anon$3.executeBucket$1(LightArrayRevolverScheduler.scala:332) ~[bundleFile:?] at org.apache.pekko.actor.LightArrayRevolverScheduler$$anon$3.nextTick(LightArrayRevolverScheduler.scala:336) ~[bundleFile:?] at org.apache.pekko.actor.LightArrayRevolverScheduler$$anon$3.run(LightArrayRevolverScheduler.scala:288) ~[bundleFile:?] at java.lang.Thread.run(Thread.java:1583) ~[?:?] 2025-08-16T03:23:53,914 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-42 | CandidateRegistryInit | 185 - org.opendaylight.controller.eos-dom-akka - 11.0.0 | member-3 : Initial removal of candidates from previous iteration failed. Rescheduling. java.util.concurrent.TimeoutException: Ask timed out on [Actor[pekko://opendaylight-cluster-data/system/singletonProxyOwnerSupervisor-no-dc#1267004491]] after [5000 ms]. Message of type [org.opendaylight.controller.eos.akka.owner.supervisor.command.ClearCandidatesForMember]. A typical reason for `AskTimeoutException` is that the recipient actor didn't send a reply. at org.apache.pekko.actor.typed.scaladsl.AskPattern$.$anonfun$onTimeout$1(AskPattern.scala:141) ~[bundleFile:?] at org.apache.pekko.pattern.PromiseActorRef$.$anonfun$apply$1(AskSupport.scala:737) ~[bundleFile:?] at org.apache.pekko.actor.Scheduler$$anon$7.run(Scheduler.scala:491) ~[bundleFile:?] at scala.concurrent.ExecutionContext$parasitic$.execute(ExecutionContext.scala:222) ~[bundleFile:?] at org.apache.pekko.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:384) ~[bundleFile:?] at org.apache.pekko.actor.LightArrayRevolverScheduler$$anon$3.executeBucket$1(LightArrayRevolverScheduler.scala:332) ~[bundleFile:?] at org.apache.pekko.actor.LightArrayRevolverScheduler$$anon$3.nextTick(LightArrayRevolverScheduler.scala:336) ~[bundleFile:?] at org.apache.pekko.actor.LightArrayRevolverScheduler$$anon$3.run(LightArrayRevolverScheduler.scala:288) ~[bundleFile:?] at java.lang.Thread.run(Thread.java:1583) ~[?:?] 
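[Editor's note] The two WARN entries above are produced by an "ask" whose target actor never replied within 5000 ms, so the returned future fails with AskTimeoutException and the caller logs the failure and reschedules. A minimal illustration of that request/timeout pattern, using placeholder actor and message names rather than the ODL classes named in the log, is:

// Sketch of the classic ask pattern whose timeout produced the warnings above.
// "targetActor" and "message" are placeholders; only the 5-second timeout is
// taken from the log ("after [5000 ms]").
import java.time.Duration;
import java.util.concurrent.CompletionStage;
import org.apache.pekko.actor.ActorRef;
import org.apache.pekko.pattern.Patterns;

final class AskWithRetrySketch {
    static CompletionStage<Object> askOnce(ActorRef targetActor, Object message) {
        // Fails with AskTimeoutException if no reply arrives within 5 seconds.
        return Patterns.ask(targetActor, message, Duration.ofSeconds(5));
    }

    static void askAndLog(ActorRef targetActor, Object message) {
        askOnce(targetActor, message).whenComplete((reply, failure) -> {
            if (failure != null) {
                // Typical cause: the recipient actor did not send a reply in time.
                // The component in the log reacts by warning and rescheduling the attempt.
                System.err.println("Ask failed, rescheduling: " + failure);
            } else {
                System.out.println("Reply: " + reply);
            }
        });
    }
}

Here the timeouts are expected during startup: the cluster singleton that owns candidate cleanup is not yet available until the node finishes joining the cluster, which happens a few seconds later in the log.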
2025-08-16T03:23:54,027 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Candidate | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-toaster-operational (Candidate): Starting new election term 1 2025-08-16T03:23:54,028 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | RaftActorBehavior | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-toaster-operational (Follower) :- Switching from behavior Follower to Candidate, election term: 1 2025-08-16T03:23:54,028 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-42 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-toaster-operational , received role change from Follower to Candidate 2025-08-16T03:23:54,029 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received role changed for member-3-shard-toaster-operational from Follower to Candidate 2025-08-16T03:23:54,039 | INFO | opendaylight-cluster-data-shard-dispatcher-32 | Candidate | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-config (Candidate): Starting new election term 1 2025-08-16T03:23:54,039 | INFO | opendaylight-cluster-data-shard-dispatcher-32 | RaftActorBehavior | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-config (Follower) :- Switching from behavior Follower to Candidate, election term: 1 2025-08-16T03:23:54,039 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-42 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-inventory-config , received role change from Follower to Candidate 2025-08-16T03:23:54,040 | INFO | opendaylight-cluster-data-shard-dispatcher-32 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received role changed for member-3-shard-inventory-config from Follower to Candidate 2025-08-16T03:23:54,040 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Candidate | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-toaster-config (Candidate): Starting new election term 1 2025-08-16T03:23:54,040 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | RaftActorBehavior | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-toaster-config (Follower) :- Switching from behavior Follower to Candidate, election term: 1 2025-08-16T03:23:54,040 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-42 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-toaster-config , received role change from Follower to Candidate 2025-08-16T03:23:54,040 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received role changed for member-3-shard-toaster-config from Follower to Candidate 2025-08-16T03:23:54,080 | INFO | opendaylight-cluster-data-shard-dispatcher-32 | Candidate | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-operational (Candidate): Starting new election term 1 2025-08-16T03:23:54,081 | INFO | opendaylight-cluster-data-shard-dispatcher-32 | RaftActorBehavior | 190 - 
org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-operational (Follower) :- Switching from behavior Follower to Candidate, election term: 1 2025-08-16T03:23:54,081 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Candidate | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-operational (Candidate): Starting new election term 1 2025-08-16T03:23:54,081 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | RaftActorBehavior | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-operational (Follower) :- Switching from behavior Follower to Candidate, election term: 1 2025-08-16T03:23:54,081 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-47 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-topology-operational , received role change from Follower to Candidate 2025-08-16T03:23:54,081 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-42 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-default-operational , received role change from Follower to Candidate 2025-08-16T03:23:54,082 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received role changed for member-3-shard-topology-operational from Follower to Candidate 2025-08-16T03:23:54,082 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received role changed for member-3-shard-default-operational from Follower to Candidate 2025-08-16T03:23:54,089 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Candidate | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-config (Candidate): Starting new election term 1 2025-08-16T03:23:54,090 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | RaftActorBehavior | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-config (Follower) :- Switching from behavior Follower to Candidate, election term: 1 2025-08-16T03:23:54,090 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-42 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-topology-config , received role change from Follower to Candidate 2025-08-16T03:23:54,090 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received role changed for member-3-shard-topology-config from Follower to Candidate 2025-08-16T03:23:54,099 | INFO | opendaylight-cluster-data-shard-dispatcher-32 | Candidate | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-operational (Candidate): Starting new election term 1 2025-08-16T03:23:54,100 | INFO | opendaylight-cluster-data-shard-dispatcher-32 | RaftActorBehavior | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-operational (Follower) :- Switching from behavior Follower to Candidate, election term: 1 2025-08-16T03:23:54,100 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Candidate | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | 
member-3-shard-default-config (Candidate): Starting new election term 1 2025-08-16T03:23:54,100 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | RaftActorBehavior | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-config (Follower) :- Switching from behavior Follower to Candidate, election term: 1 2025-08-16T03:23:54,100 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-42 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-inventory-operational , received role change from Follower to Candidate 2025-08-16T03:23:54,101 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-47 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-default-config , received role change from Follower to Candidate 2025-08-16T03:23:54,101 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received role changed for member-3-shard-default-config from Follower to Candidate 2025-08-16T03:23:54,101 | INFO | opendaylight-cluster-data-shard-dispatcher-32 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received role changed for member-3-shard-inventory-operational from Follower to Candidate 2025-08-16T03:23:55,296 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-47 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.30:2550] - Received InitJoin message from [Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/system/cluster/core/daemon/joinSeedNodeProcess-1#-1666093504]], but this node is not initialized yet 2025-08-16T03:23:55,454 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-42 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.30:2550] - Received InitJoinAck message from [Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/system/cluster/core/daemon#1588188794]] to [pekko://opendaylight-cluster-data@10.30.170.30:2550] 2025-08-16T03:23:55,503 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-42 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.30:2550] - Welcome from [pekko://opendaylight-cluster-data@10.30.170.62:2550] 2025-08-16T03:23:55,512 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received MemberUp: memberName: MemberName{name=member-1}, address: pekko://opendaylight-cluster-data@10.30.170.62:2550 2025-08-16T03:23:55,512 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-default-config with address pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-config/member-1-shard-default-config 2025-08-16T03:23:55,512 | INFO | opendaylight-cluster-data-shard-dispatcher-32 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received MemberUp: memberName: MemberName{name=member-1}, address: 
pekko://opendaylight-cluster-data@10.30.170.62:2550 2025-08-16T03:23:55,513 | INFO | opendaylight-cluster-data-shard-dispatcher-32 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-default-operational with address pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-operational/member-1-shard-default-operational 2025-08-16T03:23:55,513 | INFO | opendaylight-cluster-data-shard-dispatcher-32 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-topology-operational with address pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-operational/member-1-shard-topology-operational 2025-08-16T03:23:55,513 | INFO | opendaylight-cluster-data-shard-dispatcher-32 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-inventory-operational with address pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-operational/member-1-shard-inventory-operational 2025-08-16T03:23:55,513 | INFO | opendaylight-cluster-data-shard-dispatcher-32 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-toaster-operational with address pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-operational/member-1-shard-toaster-operational 2025-08-16T03:23:55,514 | INFO | opendaylight-cluster-data-shard-dispatcher-31 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-config: Peer address for peer member-1-shard-default-config set to pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-config/member-1-shard-default-config 2025-08-16T03:23:55,514 | INFO | opendaylight-cluster-data-shard-dispatcher-32 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-operational: Peer address for peer member-1-shard-topology-operational set to pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-operational/member-1-shard-topology-operational 2025-08-16T03:23:55,514 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-operational: Peer address for peer member-1-shard-default-operational set to pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-operational/member-1-shard-default-operational 2025-08-16T03:23:55,514 | INFO | opendaylight-cluster-data-shard-dispatcher-31 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-operational: Peer address for peer member-1-shard-inventory-operational set to pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-operational/member-1-shard-inventory-operational 2025-08-16T03:23:55,512 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-topology-config with address pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-config/member-1-shard-topology-config 2025-08-16T03:23:55,514 | INFO | opendaylight-cluster-data-shard-dispatcher-32 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-toaster-operational: Peer address for peer 
member-1-shard-toaster-operational set to pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-operational/member-1-shard-toaster-operational 2025-08-16T03:23:55,516 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-inventory-config with address pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-config/member-1-shard-inventory-config 2025-08-16T03:23:55,516 | INFO | opendaylight-cluster-data-shard-dispatcher-32 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-config: Peer address for peer member-1-shard-topology-config set to pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-config/member-1-shard-topology-config 2025-08-16T03:23:55,516 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-config: Peer address for peer member-1-shard-inventory-config set to pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-config/member-1-shard-inventory-config 2025-08-16T03:23:55,516 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-toaster-config with address pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-config/member-1-shard-toaster-config 2025-08-16T03:23:55,517 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-toaster-config: Peer address for peer member-1-shard-toaster-config set to pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-config/member-1-shard-toaster-config 2025-08-16T03:23:55,519 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-45 | LocalActorRef | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Message [org.apache.pekko.cluster.ClusterEvent$MemberJoined] to Actor[pekko://opendaylight-cluster-data/user/rpc/action-registry/gossiper#1891152831] was unhandled. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. 2025-08-16T03:23:55,519 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-45 | LocalActorRef | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Message [org.apache.pekko.cluster.ClusterEvent$MemberJoined] to Actor[pekko://opendaylight-cluster-data/user/rpc/action-registry/gossiper#1891152831] was unhandled. [2] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. 2025-08-16T03:23:55,519 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-45 | LocalActorRef | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Message [org.apache.pekko.cluster.ClusterEvent$MemberJoined] to Actor[pekko://opendaylight-cluster-data/user/rpc/registry/gossiper#-171826102] was unhandled. [3] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. 
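The shard entries above show the usual Raft role transitions: when a follower's election timer fires it becomes a Candidate and starts a new term ("Starting new election term 1"), and, as the RequestVote entries later in this log show, it drops back to Follower as soon as it sees a higher term from a peer. A minimal Java sketch of that rule only, not the sal-akka-raft implementation; the class and field names are illustrative:

    // Illustrative only: the Raft role rules visible in this log, not OpenDaylight's sal-akka-raft code.
    enum Role { FOLLOWER, CANDIDATE, LEADER }

    final class RaftRoleSketch {
        private Role role = Role.FOLLOWER;
        private long currentTerm = 0;

        /** Election timeout fired without hearing from a leader: bump the term, become Candidate. */
        void onElectionTimeout() {
            if (role != Role.LEADER) {
                currentTerm++;                 // "Starting new election term 1"
                role = Role.CANDIDATE;         // "Switching from behavior Follower to Candidate"
            }
        }

        /** Any message carrying a higher term forces a step-down, e.g. RequestVote{term=2, ...}. */
        void onMessageTerm(long peerTerm) {
            if (peerTerm > currentTerm) {
                currentTerm = peerTerm;
                role = Role.FOLLOWER;          // "... is greater than Candidate's term 1 - switching to Follower"
            }
        }

        public static void main(String[] args) {
            RaftRoleSketch shard = new RaftRoleSketch();
            shard.onElectionTimeout();         // term 1, Candidate
            shard.onMessageTerm(2);            // a peer is already at term 2 -> back to Follower
            System.out.println("role=" + shard.role + ", term=" + shard.currentTerm);
        }
    }
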
2025-08-16T03:23:55,519 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-45 | LocalActorRef | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Message [org.apache.pekko.cluster.ClusterEvent$MemberJoined] to Actor[pekko://opendaylight-cluster-data/user/rpc/registry/gossiper#-171826102] was unhandled. [4] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. 2025-08-16T03:23:55,548 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | ClusterSingletonProxy | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Singleton identified at [pekko://opendaylight-cluster-data@10.30.170.62:2550/system/singletonManagerOwnerSupervisor/OwnerSupervisor] 2025-08-16T03:23:55,587 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | EmptyLocalActorRef | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Message [org.opendaylight.controller.eos.akka.owner.supervisor.command.ClearCandidatesResponse] to Actor[pekko://opendaylight-cluster-data/temp/$a] was not delivered. [5] dead letters encountered. If this is not an expected behavior then Actor[pekko://opendaylight-cluster-data/temp/$a] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. 2025-08-16T03:23:55,587 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | EmptyLocalActorRef | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Message [org.opendaylight.controller.eos.akka.owner.supervisor.command.ClearCandidatesResponse] to Actor[pekko://opendaylight-cluster-data/temp/$b] was not delivered. [6] dead letters encountered. If this is not an expected behavior then Actor[pekko://opendaylight-cluster-data/temp/$b] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. 
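The dead-letter entries above are informational; the messages themselves name the knobs that control this logging: 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. A hedged example of how those settings could be tuned in the Pekko HOCON configuration; only the setting names come from the log, the values and the idea of overriding them in a custom .conf file are assumptions:

    pekko {
      # Log only the first 10 dead letters, then stop ("off" silences them entirely).
      log-dead-letters = 10
      log-dead-letters-during-shutdown = off
    }
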
2025-08-16T03:23:56,188 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received MemberUp: memberName: MemberName{name=member-2}, address: pekko://opendaylight-cluster-data@10.30.170.196:2550 2025-08-16T03:23:56,189 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-default-operational with address pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-operational/member-2-shard-default-operational 2025-08-16T03:23:56,189 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-topology-operational with address pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-operational/member-2-shard-topology-operational 2025-08-16T03:23:56,189 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-inventory-operational with address pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-operational/member-2-shard-inventory-operational 2025-08-16T03:23:56,189 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-toaster-operational with address pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-operational/member-2-shard-toaster-operational 2025-08-16T03:23:56,189 | INFO | opendaylight-cluster-data-shard-dispatcher-31 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-operational: Peer address for peer member-2-shard-default-operational set to pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-operational/member-2-shard-default-operational 2025-08-16T03:23:56,190 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-operational: Peer address for peer member-2-shard-topology-operational set to pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-operational/member-2-shard-topology-operational 2025-08-16T03:23:56,190 | INFO | opendaylight-cluster-data-shard-dispatcher-31 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-operational: Peer address for peer member-2-shard-inventory-operational set to pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-operational/member-2-shard-inventory-operational 2025-08-16T03:23:56,190 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-toaster-operational: Peer address for peer member-2-shard-toaster-operational set to pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-operational/member-2-shard-toaster-operational 2025-08-16T03:23:56,190 | INFO | opendaylight-cluster-data-shard-dispatcher-31 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received MemberUp: memberName: MemberName{name=member-3}, address: 
pekko://opendaylight-cluster-data@10.30.170.30:2550 2025-08-16T03:23:56,190 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received MemberUp: memberName: MemberName{name=member-2}, address: pekko://opendaylight-cluster-data@10.30.170.196:2550 2025-08-16T03:23:56,191 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-default-config with address pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-config/member-2-shard-default-config 2025-08-16T03:23:56,191 | INFO | opendaylight-cluster-data-shard-dispatcher-31 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-3-shard-default-operational with address pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-operational/member-3-shard-default-operational 2025-08-16T03:23:56,191 | INFO | opendaylight-cluster-data-shard-dispatcher-31 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-3-shard-topology-operational with address pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-operational/member-3-shard-topology-operational 2025-08-16T03:23:56,191 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-config: Peer address for peer member-2-shard-default-config set to pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-config/member-2-shard-default-config 2025-08-16T03:23:56,191 | INFO | opendaylight-cluster-data-shard-dispatcher-31 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-3-shard-inventory-operational with address pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-operational/member-3-shard-inventory-operational 2025-08-16T03:23:56,191 | INFO | opendaylight-cluster-data-shard-dispatcher-31 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-3-shard-toaster-operational with address pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-operational/member-3-shard-toaster-operational 2025-08-16T03:23:56,192 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-topology-config with address pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-config/member-2-shard-topology-config 2025-08-16T03:23:56,192 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-inventory-config with address pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-config/member-2-shard-inventory-config 2025-08-16T03:23:56,192 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-config: Peer address for peer member-2-shard-topology-config set to 
pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-config/member-2-shard-topology-config 2025-08-16T03:23:56,192 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-config: Peer address for peer member-2-shard-inventory-config set to pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-config/member-2-shard-inventory-config 2025-08-16T03:23:56,193 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-toaster-config with address pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-config/member-2-shard-toaster-config 2025-08-16T03:23:56,193 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-toaster-config: Peer address for peer member-2-shard-toaster-config set to pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-config/member-2-shard-toaster-config 2025-08-16T03:23:56,193 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received MemberUp: memberName: MemberName{name=member-3}, address: pekko://opendaylight-cluster-data@10.30.170.30:2550 2025-08-16T03:23:56,194 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-3-shard-default-config with address pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-config/member-3-shard-default-config 2025-08-16T03:23:56,194 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-3-shard-topology-config with address pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-config/member-3-shard-topology-config 2025-08-16T03:23:56,194 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-3-shard-inventory-config with address pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-config/member-3-shard-inventory-config 2025-08-16T03:23:56,194 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-3-shard-toaster-config with address pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-config/member-3-shard-toaster-config 2025-08-16T03:23:56,199 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-41 | ClusterSingletonManager | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | ClusterSingletonManager state change [Start -> Younger] 2025-08-16T03:24:00,738 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Candidate | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-operational (Candidate): Term 2 in "RequestVote{term=2, candidateId=member-2-shard-topology-operational, lastLogIndex=-1, lastLogTerm=-1}" message is greater than Candidate's term 1 - switching to Follower 2025-08-16T03:24:00,738 | INFO | 
opendaylight-cluster-data-shard-dispatcher-39 | Candidate | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-config (Candidate): Term 2 in "RequestVote{term=2, candidateId=member-2-shard-topology-config, lastLogIndex=-1, lastLogTerm=-1}" message is greater than Candidate's term 1 - switching to Follower 2025-08-16T03:24:00,740 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Candidate | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-operational (Candidate): Term 2 in "RequestVote{term=2, candidateId=member-2-shard-default-operational, lastLogIndex=-1, lastLogTerm=-1}" message is greater than Candidate's term 1 - switching to Follower 2025-08-16T03:24:00,740 | INFO | opendaylight-cluster-data-shard-dispatcher-31 | Candidate | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-config (Candidate): Term 2 in "RequestVote{term=2, candidateId=member-2-shard-default-config, lastLogIndex=-1, lastLogTerm=-1}" message is greater than Candidate's term 1 - switching to Follower 2025-08-16T03:24:00,741 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Candidate | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-toaster-operational (Candidate): Term 2 in "RequestVote{term=2, candidateId=member-2-shard-toaster-operational, lastLogIndex=-1, lastLogTerm=-1}" message is greater than Candidate's term 1 - switching to Follower 2025-08-16T03:24:00,747 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Candidate | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-operational (Candidate): Term 2 in "RequestVote{term=2, candidateId=member-2-shard-inventory-operational, lastLogIndex=-1, lastLogTerm=-1}" message is greater than Candidate's term 1 - switching to Follower 2025-08-16T03:24:00,755 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | RaftActorBehavior | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-config (Candidate) :- Switching from behavior Candidate to Follower, election term: 2 2025-08-16T03:24:00,756 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-45 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-topology-config , received role change from Candidate to Follower 2025-08-16T03:24:00,756 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received role changed for member-3-shard-topology-config from Candidate to Follower 2025-08-16T03:24:00,758 | INFO | opendaylight-cluster-data-shard-dispatcher-31 | RaftActorBehavior | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-config (Candidate) :- Switching from behavior Candidate to Follower, election term: 2 2025-08-16T03:24:00,758 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | RaftActorBehavior | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-toaster-operational (Candidate) :- Switching from behavior Candidate to Follower, election term: 2 2025-08-16T03:24:00,758 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | RaftActorBehavior | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-operational (Candidate) :- Switching from behavior Candidate to Follower, election term: 2 2025-08-16T03:24:00,758 | INFO | 
opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-default-config , received role change from Candidate to Follower 2025-08-16T03:24:00,758 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-45 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-default-operational , received role change from Candidate to Follower 2025-08-16T03:24:00,759 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received role changed for member-3-shard-default-config from Candidate to Follower 2025-08-16T03:24:00,759 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-toaster-operational , received role change from Candidate to Follower 2025-08-16T03:24:00,759 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received role changed for member-3-shard-default-operational from Candidate to Follower 2025-08-16T03:24:00,759 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received role changed for member-3-shard-toaster-operational from Candidate to Follower 2025-08-16T03:24:00,759 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | RaftActorBehavior | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-operational (Candidate) :- Switching from behavior Candidate to Follower, election term: 2 2025-08-16T03:24:00,759 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-topology-operational , received role change from Candidate to Follower 2025-08-16T03:24:00,759 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received role changed for member-3-shard-topology-operational from Candidate to Follower 2025-08-16T03:24:00,765 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | RaftActorBehavior | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-operational (Candidate) :- Switching from behavior Candidate to Follower, election term: 2 2025-08-16T03:24:00,765 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-inventory-operational , received role change from Candidate to Follower 2025-08-16T03:24:00,765 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received role changed for member-3-shard-inventory-operational from Candidate to Follower 2025-08-16T03:24:00,786 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | 
shard-manager-operational: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@6cf0ffbe 2025-08-16T03:24:00,787 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational Received follower initial sync status for member-3-shard-topology-operational status sync done false 2025-08-16T03:24:00,788 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@2a6cce3f 2025-08-16T03:24:00,790 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational Received follower initial sync status for member-3-shard-inventory-operational status sync done false 2025-08-16T03:24:00,790 | INFO | opendaylight-cluster-data-shard-dispatcher-32 | Candidate | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-toaster-config (Candidate): Term 2 in "RequestVote{term=2, candidateId=member-2-shard-toaster-config, lastLogIndex=-1, lastLogTerm=-1}" message is greater than Candidate's term 1 - switching to Follower 2025-08-16T03:24:00,791 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@6c849ef9 2025-08-16T03:24:00,791 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@37821a5c 2025-08-16T03:24:00,791 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational Received follower initial sync status for member-3-shard-toaster-operational status sync done false 2025-08-16T03:24:00,791 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@76ddfe7d 2025-08-16T03:24:00,792 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: All Shards are ready - data store operational is ready 2025-08-16T03:24:00,792 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config Received follower initial sync status for member-3-shard-default-config status sync done false 2025-08-16T03:24:00,793 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config Received follower initial sync status for member-3-shard-topology-config status sync done false 2025-08-16T03:24:00,793 | INFO | 
opendaylight-cluster-data-shard-dispatcher-38 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@7fdea74f 2025-08-16T03:24:00,793 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational Received follower initial sync status for member-3-shard-default-operational status sync done false 2025-08-16T03:24:00,796 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | OSGiDOMStore | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Datastore service type OPERATIONAL activated 2025-08-16T03:24:00,796 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | OSGiDistributedDataStore | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Distributed Datastore type OPERATIONAL started 2025-08-16T03:24:00,811 | INFO | opendaylight-cluster-data-shard-dispatcher-32 | RaftActorBehavior | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-toaster-config (Candidate) :- Switching from behavior Candidate to Follower, election term: 2 2025-08-16T03:24:00,812 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-toaster-config , received role change from Candidate to Follower 2025-08-16T03:24:00,813 | INFO | opendaylight-cluster-data-shard-dispatcher-32 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received role changed for member-3-shard-toaster-config from Candidate to Follower 2025-08-16T03:24:00,813 | INFO | opendaylight-cluster-data-shard-dispatcher-32 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config Received follower initial sync status for member-3-shard-toaster-config status sync done false 2025-08-16T03:24:00,813 | INFO | opendaylight-cluster-data-shard-dispatcher-32 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@c1f17be 2025-08-16T03:24:00,816 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Candidate | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-config (Candidate): Term 2 in "RequestVote{term=2, candidateId=member-2-shard-inventory-config, lastLogIndex=-1, lastLogTerm=-1}" message is greater than Candidate's term 1 - switching to Follower 2025-08-16T03:24:00,852 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | RaftActorBehavior | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-config (Candidate) :- Switching from behavior Candidate to Follower, election term: 2 2025-08-16T03:24:00,853 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-inventory-config , received role change from Candidate to Follower 2025-08-16T03:24:00,853 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore 
- 11.0.0 | shard-manager-config Received follower initial sync status for member-3-shard-inventory-config status sync done false 2025-08-16T03:24:00,853 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received role changed for member-3-shard-inventory-config from Candidate to Follower 2025-08-16T03:24:00,854 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@7a800b58 2025-08-16T03:24:00,854 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: All Shards are ready - data store config is ready 2025-08-16T03:24:00,875 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-45 | ConcurrentDOMDataBroker | 358 - org.opendaylight.yangtools.util - 14.0.14 | ThreadFactory created: CommitFutures 2025-08-16T03:24:00,878 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-45 | DataBrokerCommitExecutor | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | DOM Data Broker commit exector started 2025-08-16T03:24:00,879 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-45 | OSGiDOMStore | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Datastore service type CONFIGURATION activated 2025-08-16T03:24:00,894 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-45 | ConcurrentDOMDataBroker | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | DOM Data Broker started 2025-08-16T03:24:00,908 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-45 | AbstractAdaptedService | 204 - org.opendaylight.mdsal.binding-dom-adapter - 14.0.13 | Binding/DOM adapter for DataBroker activated 2025-08-16T03:24:00,943 | INFO | Blueprint Extender: 2 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.cert/0.21.0 is waiting for dependencies [(objectClass=org.opendaylight.aaa.encrypt.AAAEncryptionService), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker))] 2025-08-16T03:24:01,021 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-config: resolved shard 0 to ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-config/member-2-shard-default-config#-1643363924], sessionId=0, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=absent} 2025-08-16T03:24:01,022 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-config: resolving connection ConnectingClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, cookie=0} to ConnectedClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, cookie=0, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-config/member-2-shard-default-config#-1643363924], 
sessionId=0, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=absent}} 2025-08-16T03:24:01,051 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-config: replaced connection ConnectingClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, cookie=0} with ConnectedClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=0}, cookie=0, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-config/member-2-shard-default-config#-1643363924], sessionId=0, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=absent}} in 28.99 ms 2025-08-16T03:24:01,099 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.forwardingrules-manager/0.20.0 is waiting for dependencies [Initial app config ForwardingRulesManagerConfig, (objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService)] 2025-08-16T03:24:01,121 | INFO | Blueprint Extender: 2 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.cert/0.21.0 is waiting for dependencies [(objectClass=org.opendaylight.aaa.encrypt.AAAEncryptionService)] 2025-08-16T03:24:01,124 | INFO | Blueprint Extender: 2 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.topology-lldp-discovery/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService), Initial app config TopologyLldpDiscoveryConfig] 2025-08-16T03:24:01,125 | INFO | Blueprint Extender: 2 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.forwardingrules-manager/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService)] 2025-08-16T03:24:01,139 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational Received follower initial sync status for member-3-shard-default-operational status sync done true 2025-08-16T03:24:01,158 | INFO | Blueprint Extender: 2 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.lldp-speaker/0.20.0 is waiting for dependencies [Initial app config LldpSpeakerConfig] 2025-08-16T03:24:01,159 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.topology-lldp-discovery/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService)] 2025-08-16T03:24:01,191 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Loading properties from '(urn:opendaylight:params:xml:ns:yang:openflow:provider:config?revision=2016-05-10)openflow-provider-config' YANG file 2025-08-16T03:24:01,195 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - 
org.opendaylight.openflowplugin.impl - 0.20.0 | rpc-requests-quota configuration property was changed to '20000' 2025-08-16T03:24:01,195 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | global-notification-quota configuration property was changed to '64000' 2025-08-16T03:24:01,196 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | switch-features-mandatory configuration property was changed to 'false' 2025-08-16T03:24:01,196 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | enable-flow-removed-notification configuration property was changed to 'true' 2025-08-16T03:24:01,196 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | is-statistics-rpc-enabled configuration property was changed to 'false' 2025-08-16T03:24:01,197 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | barrier-count-limit configuration property was changed to '25600' 2025-08-16T03:24:01,197 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | barrier-interval-timeout-limit configuration property was changed to '500' 2025-08-16T03:24:01,197 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | echo-reply-timeout configuration property was changed to '2000' 2025-08-16T03:24:01,197 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | is-statistics-polling-on configuration property was changed to 'true' 2025-08-16T03:24:01,197 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | is-table-statistics-polling-on configuration property was changed to 'true' 2025-08-16T03:24:01,197 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | is-flow-statistics-polling-on configuration property was changed to 'true' 2025-08-16T03:24:01,197 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | is-group-statistics-polling-on configuration property was changed to 'true' 2025-08-16T03:24:01,197 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | is-meter-statistics-polling-on configuration property was changed to 'true' 2025-08-16T03:24:01,197 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | is-port-statistics-polling-on configuration property was changed to 'true' 2025-08-16T03:24:01,197 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | is-queue-statistics-polling-on configuration property was changed to 'true' 2025-08-16T03:24:01,197 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | skip-table-features configuration property was changed to 'true' 2025-08-16T03:24:01,197 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | basic-timer-delay configuration 
property was changed to '3000' 2025-08-16T03:24:01,198 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | maximum-timer-delay configuration property was changed to '900000' 2025-08-16T03:24:01,198 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | use-single-layer-serialization configuration property was changed to 'true' 2025-08-16T03:24:01,198 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | thread-pool-min-threads configuration property was changed to '1' 2025-08-16T03:24:01,198 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | thread-pool-max-threads configuration property was changed to '32000' 2025-08-16T03:24:01,198 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | thread-pool-timeout configuration property was changed to '60' 2025-08-16T03:24:01,198 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | device-connection-rate-limit-per-min configuration property was changed to '0' 2025-08-16T03:24:01,198 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | device-connection-hold-time-in-seconds configuration property was changed to '0' 2025-08-16T03:24:01,198 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | device-datastore-removal-delay configuration property was changed to '500' 2025-08-16T03:24:01,198 | INFO | Blueprint Extender: 3 | OSGiConfigurationServiceFactory | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Loading configuration from 'org.opendaylight.openflowplugin' configuration file 2025-08-16T03:24:01,223 | INFO | CommitFutures-0 | OSGiEncryptionServiceConfigurator | 165 - org.opendaylight.aaa.encrypt-service-impl - 0.21.0 | Configuration update succeeded 2025-08-16T03:24:01,236 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | felix.fileinstall.filename configuration property was changed to 'file:/tmp/karaf-0.23.0/etc/org.opendaylight.openflowplugin.cfg' 2025-08-16T03:24:01,236 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | service.pid configuration property was changed to 'org.opendaylight.openflowplugin' 2025-08-16T03:24:01,252 | INFO | opendaylight-cluster-data-shard-dispatcher-32 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config Received follower initial sync status for member-3-shard-default-config status sync done true 2025-08-16T03:24:01,255 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.impl/0.20.0 has been started 2025-08-16T03:24:01,260 | INFO | Blueprint Event Dispatcher: 1 | BlueprintBundleTracker | 180 - org.opendaylight.controller.blueprint - 11.0.0 | Blueprint container for bundle org.opendaylight.openflowplugin.impl_0.20.0 [309] was successfully created 2025-08-16T03:24:01,263 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-45 | EOSClusterSingletonServiceProvider | 257 - 
org.opendaylight.mdsal.mdsal-singleton-impl - 14.0.13 | Cluster Singleton Service started 2025-08-16T03:24:01,310 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-45 | YangLibraryWriterSingleton | 291 - org.opendaylight.netconf.yanglib-mdsal-writer - 9.0.0 | ietf-yang-library writer registered 2025-08-16T03:24:01,317 | INFO | Blueprint Extender: 2 | LLDPSpeaker | 300 - org.opendaylight.openflowplugin.applications.lldp-speaker - 0.20.0 | LLDPSpeaker started, it will send LLDP frames each 5 seconds 2025-08-16T03:24:01,320 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config Received follower initial sync status for member-3-shard-toaster-config status sync done true 2025-08-16T03:24:01,353 | INFO | Blueprint Extender: 1 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | org.opendaylight.openflowplugin.applications.frm.impl.ForwardingRulesManagerImpl@1b5b98ee was registered as configuration listener to OpenFlowPlugin configuration service 2025-08-16T03:24:01,381 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | org.opendaylight.openflowplugin.applications.topology.lldp.LLDPLinkAger@4976fa81 was registered as configuration listener to OpenFlowPlugin configuration service 2025-08-16T03:24:01,393 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-45 | FlowCapableTopologyProvider | 304 - org.opendaylight.openflowplugin.applications.topology-manager - 0.20.0 | Topology Manager service started. 2025-08-16T03:24:01,468 | INFO | Blueprint Extender: 3 | LLDPActivator | 303 - org.opendaylight.openflowplugin.applications.topology-lldp-discovery - 0.20.0 | Starting LLDPActivator with lldpSecureKey: aa9251f8-c7c0-4322-b8d6-c3a84593bda3 2025-08-16T03:24:01,469 | INFO | Blueprint Extender: 3 | LLDPActivator | 303 - org.opendaylight.openflowplugin.applications.topology-lldp-discovery - 0.20.0 | LLDPDiscoveryListener started. 2025-08-16T03:24:01,469 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.topology-lldp-discovery/0.20.0 has been started 2025-08-16T03:24:01,473 | INFO | Blueprint Event Dispatcher: 1 | BlueprintBundleTracker | 180 - org.opendaylight.controller.blueprint - 11.0.0 | Blueprint container for bundle org.opendaylight.openflowplugin.applications.topology-lldp-discovery_0.20.0 [303] was successfully created 2025-08-16T03:24:01,518 | INFO | Blueprint Extender: 2 | NodeConnectorInventoryEventTranslator | 300 - org.opendaylight.openflowplugin.applications.lldp-speaker - 0.20.0 | NodeConnectorInventoryEventTranslator has started. 
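The ConfigurationServiceFactoryImpl entries earlier in this section first load the YANG defaults for openflow-provider-config and then apply the file named by felix.fileinstall.filename, i.e. /tmp/karaf-0.23.0/etc/org.opendaylight.openflowplugin.cfg. A sketch of what such a .cfg file could contain, using only property names and values already reported above; which of these lines actually sit in the file versus coming from the YANG defaults is an assumption:

    # etc/org.opendaylight.openflowplugin.cfg (illustrative; values as logged above)
    rpc-requests-quota=20000
    global-notification-quota=64000
    switch-features-mandatory=false
    enable-flow-removed-notification=true
    is-statistics-polling-on=true
    barrier-count-limit=25600
    barrier-interval-timeout-limit=500
    echo-reply-timeout=2000
    thread-pool-min-threads=1
    thread-pool-max-threads=32000
    thread-pool-timeout=60
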
2025-08-16T03:24:01,525 | INFO | Blueprint Extender: 2 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.lldp-speaker/0.20.0 has been started 2025-08-16T03:24:01,526 | INFO | Blueprint Event Dispatcher: 1 | BlueprintBundleTracker | 180 - org.opendaylight.controller.blueprint - 11.0.0 | Blueprint container for bundle org.opendaylight.openflowplugin.applications.lldp-speaker_0.20.0 [300] was successfully created 2025-08-16T03:24:01,530 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-45 | ArbitratorReconciliationManagerImpl | 296 - org.opendaylight.openflowplugin.applications.arbitratorreconciliation-impl - 0.20.0 | ArbitratorReconciliationManager has started successfully. 2025-08-16T03:24:01,532 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-42 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-operational: resolved shard 0 to ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-operational/member-2-shard-default-operational#824283698], sessionId=0, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=absent} 2025-08-16T03:24:01,532 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-42 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-operational: resolving connection ConnectingClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-operational, generation=0}, cookie=0} to ConnectedClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-operational, generation=0}, cookie=0, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-operational/member-2-shard-default-operational#824283698], sessionId=0, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=absent}} 2025-08-16T03:24:01,533 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-42 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-operational: replaced connection ConnectingClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-operational, generation=0}, cookie=0} with ConnectedClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-operational, generation=0}, cookie=0, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-operational/member-2-shard-default-operational#824283698], sessionId=0, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=absent}} in 1.265 ms 2025-08-16T03:24:01,574 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-45 | OSGiPasswordServiceConfigBootstrap | 170 - org.opendaylight.aaa.password-service-impl - 0.21.0 | Listening for password service configuration 2025-08-16T03:24:01,577 | INFO | Blueprint Extender: 2 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.shiro/0.21.0 is waiting for dependencies [(objectClass=org.opendaylight.aaa.cert.api.ICertificateManager), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker)), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker)), 
(objectClass=org.opendaylight.aaa.api.password.service.PasswordHashService), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (objectClass=org.opendaylight.aaa.api.PasswordCredentialAuth)] 2025-08-16T03:24:01,582 | INFO | Blueprint Extender: 1 | ForwardingRulesManagerImpl | 299 - org.opendaylight.openflowplugin.applications.forwardingrules-manager - 0.20.0 | ForwardingRulesManager has started successfully. 2025-08-16T03:24:01,580 | ERROR | opendaylight-cluster-data-notification-dispatcher-49 | H2Store | 167 - org.opendaylight.aaa.idm-store-h2 - 0.21.0 | bundle org.opendaylight.aaa.idm-store-h2:0.21.0 (167)[org.opendaylight.aaa.datastore.h2.H2Store(115)] : Constructor argument 0 in class class org.opendaylight.aaa.datastore.h2.H2Store has unsupported type org.opendaylight.aaa.datastore.h2.ConnectionProvider 2025-08-16T03:24:01,589 | INFO | opendaylight-cluster-data-notification-dispatcher-49 | DefaultPasswordHashService | 170 - org.opendaylight.aaa.password-service-impl - 0.21.0 | DefaultPasswordHashService will utilize default iteration count=20000 2025-08-16T03:24:01,590 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.forwardingrules-manager/0.20.0 has been started 2025-08-16T03:24:01,591 | INFO | opendaylight-cluster-data-notification-dispatcher-49 | DefaultPasswordHashService | 170 - org.opendaylight.aaa.password-service-impl - 0.21.0 | DefaultPasswordHashService will utilize default algorithm=SHA-512 2025-08-16T03:24:01,591 | INFO | opendaylight-cluster-data-notification-dispatcher-49 | DefaultPasswordHashService | 170 - org.opendaylight.aaa.password-service-impl - 0.21.0 | DefaultPasswordHashService will not utilize a private salt, since none was configured 2025-08-16T03:24:01,592 | INFO | Blueprint Event Dispatcher: 1 | BlueprintBundleTracker | 180 - org.opendaylight.controller.blueprint - 11.0.0 | Blueprint container for bundle org.opendaylight.openflowplugin.applications.forwardingrules-manager_0.20.0 [299] was successfully created 2025-08-16T03:24:01,597 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-45 | DeviceOwnershipServiceImpl | 298 - org.opendaylight.openflowplugin.applications.device-ownership-service - 0.20.0 | DeviceOwnershipService started 2025-08-16T03:24:01,599 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-45 | DefaultConfigPusher | 301 - org.opendaylight.openflowplugin.applications.of-switch-config-pusher - 0.20.0 | DefaultConfigPusher has started. 
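The DefaultPasswordHashService entries above report its effective parameters: SHA-512, 20000 iterations, and no configured private salt. A self-contained Java illustration of what iterated, salted SHA-512 hashing with those parameters means in principle; this is explicitly not the org.opendaylight.aaa password-service implementation, and all names here are made up for the sketch:

    // Illustrative only: iterated, salted SHA-512 hashing with the parameters reported in the log.
    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.security.SecureRandom;
    import java.util.Base64;

    final class IteratedHashSketch {
        static String hash(char[] password, byte[] salt, int iterations) throws Exception {
            MessageDigest md = MessageDigest.getInstance("SHA-512");
            md.update(salt);                                              // per-credential salt
            byte[] digest = md.digest(new String(password).getBytes(StandardCharsets.UTF_8));
            for (int i = 1; i < iterations; i++) {                        // repeat to slow brute-force attempts
                md.reset();
                digest = md.digest(digest);
            }
            return Base64.getEncoder().encodeToString(digest);
        }

        public static void main(String[] args) throws Exception {
            byte[] salt = new byte[16];
            new SecureRandom().nextBytes(salt);
            // The "private salt" mentioned in the log would be an additional, configured site-wide salt;
            // none is applied here, matching the "will not utilize a private salt" message above.
            System.out.println(hash("example".toCharArray(), salt, 20000));
        }
    }
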
2025-08-16T03:24:01,600 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-45 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.opendaylight.openflowplugin.srm-shell/0.20.0 2025-08-16T03:24:01,632 | INFO | opendaylight-cluster-data-notification-dispatcher-49 | H2Store | 167 - org.opendaylight.aaa.idm-store-h2 - 0.21.0 | H2 IDMStore activated 2025-08-16T03:24:01,634 | INFO | Blueprint Extender: 2 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.shiro/0.21.0 is waiting for dependencies [(objectClass=org.opendaylight.aaa.cert.api.ICertificateManager), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker)), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker)), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (objectClass=org.opendaylight.aaa.api.PasswordCredentialAuth)] 2025-08-16T03:24:01,644 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-45 | OSGiSwitchConnectionProviders | 316 - org.opendaylight.openflowplugin.openflowjava.blueprint-config - 0.20.0 | MD-SAL configuration-based SwitchConnectionProviders started 2025-08-16T03:24:01,645 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.shiro/0.21.0 is waiting for dependencies [(objectClass=org.opendaylight.aaa.cert.api.ICertificateManager), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker)), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker)), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker)] 2025-08-16T03:24:01,652 | INFO | opendaylight-cluster-data-notification-dispatcher-50 | AAAEncryptionServiceImpl | 165 - org.opendaylight.aaa.encrypt-service-impl - 0.21.0 | AAAEncryptionService activated 2025-08-16T03:24:01,654 | INFO | opendaylight-cluster-data-notification-dispatcher-50 | OSGiEncryptionServiceConfigurator | 165 - org.opendaylight.aaa.encrypt-service-impl - 0.21.0 | Encryption Service enabled 2025-08-16T03:24:01,656 | INFO | opendaylight-cluster-data-notification-dispatcher-54 | OSGiSwitchConnectionProviders | 316 - org.opendaylight.openflowplugin.openflowjava.blueprint-config - 0.20.0 | Starting instance of type 'openflow-switch-connection-provider-default-impl' 2025-08-16T03:24:01,665 | INFO | Blueprint Extender: 2 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.shiro/0.21.0 is waiting for dependencies [(objectClass=org.opendaylight.aaa.cert.api.ICertificateManager), Initial app config ShiroConfiguration, (objectClass=org.opendaylight.mdsal.binding.api.DataBroker)] 2025-08-16T03:24:01,676 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.shiro/0.21.0 is waiting for dependencies [(objectClass=org.opendaylight.aaa.cert.api.ICertificateManager), Initial app config ShiroConfiguration] 2025-08-16T03:24:01,684 | INFO | Blueprint Extender: 2 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.shiro/0.21.0 is waiting for dependencies [(objectClass=org.opendaylight.aaa.cert.api.ICertificateManager)] 2025-08-16T03:24:01,698 | INFO | Blueprint Extender: 3 | LazyBindingList | 325 - 
org.opendaylight.yangtools.binding-data-codec-dynamic - 14.0.14 | Using lazy population for lists larger than 16 element(s) 2025-08-16T03:24:01,731 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-45 | OSGiFactorySwitchConnectionConfiguration | 316 - org.opendaylight.openflowplugin.openflowjava.blueprint-config - 0.20.0 | Checking presence of configuration for (urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:config?revision=2016-05-06)switch-connection-config[{(urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:config?revision=2016-05-06)instance-name=openflow-switch-connection-provider-default-impl}] 2025-08-16T03:24:01,735 | INFO | Blueprint Extender: 3 | AaaCertMdsalProvider | 163 - org.opendaylight.aaa.cert - 0.21.0 | AaaCertMdsalProvider Initialized 2025-08-16T03:24:01,736 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-45 | OSGiFactorySwitchConnectionConfiguration | 316 - org.opendaylight.openflowplugin.openflowjava.blueprint-config - 0.20.0 | Checking presence of configuration for (urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:config?revision=2016-05-06)switch-connection-config[{(urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:config?revision=2016-05-06)instance-name=openflow-switch-connection-provider-legacy-impl}] 2025-08-16T03:24:01,742 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-41 | OSGiFactorySwitchConnectionConfiguration | 316 - org.opendaylight.openflowplugin.openflowjava.blueprint-config - 0.20.0 | Configuration for (urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:config?revision=2016-05-06)switch-connection-config[{(urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:config?revision=2016-05-06)instance-name=openflow-switch-connection-provider-default-impl}] already present 2025-08-16T03:24:01,746 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-45 | OSGiFactorySwitchConnectionConfiguration | 316 - org.opendaylight.openflowplugin.openflowjava.blueprint-config - 0.20.0 | Configuration for (urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:config?revision=2016-05-06)switch-connection-config[{(urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:config?revision=2016-05-06)instance-name=openflow-switch-connection-provider-legacy-impl}] already present 2025-08-16T03:24:01,774 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-45 | OSGiClusterAdmin | 193 - org.opendaylight.controller.sal-cluster-admin-impl - 11.0.0 | Cluster Admin services started 2025-08-16T03:24:01,777 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-45 | OSGiDistributedDataStore | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Distributed Datastore type CONFIGURATION started 2025-08-16T03:24:01,813 | INFO | opendaylight-cluster-data-shard-dispatcher-32 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational Received follower initial sync status for member-3-shard-inventory-operational status sync done true 2025-08-16T03:24:01,813 | INFO | opendaylight-cluster-data-shard-dispatcher-32 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational Received follower initial sync status for member-3-shard-toaster-operational status sync done true 2025-08-16T03:24:01,813 | INFO | opendaylight-cluster-data-shard-dispatcher-32 | ShardManager | 196 - 
org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational Received follower initial sync status for member-3-shard-topology-operational status sync done true 2025-08-16T03:24:01,813 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config Received follower initial sync status for member-3-shard-topology-config status sync done true 2025-08-16T03:24:01,869 | INFO | opendaylight-cluster-data-shard-dispatcher-31 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config Received follower initial sync status for member-3-shard-inventory-config status sync done true 2025-08-16T03:24:01,963 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-operational: resolved shard 1 to ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-operational/member-2-shard-inventory-operational#291433987], sessionId=1, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent} 2025-08-16T03:24:01,963 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-operational: resolving connection ConnectingClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-operational, generation=0}, cookie=1} to ConnectedClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-operational, generation=0}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-operational/member-2-shard-inventory-operational#291433987], sessionId=1, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} 2025-08-16T03:24:01,965 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-operational: replaced connection ConnectingClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-operational, generation=0}, cookie=1} with ConnectedClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-operational, generation=0}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-operational/member-2-shard-inventory-operational#291433987], sessionId=1, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} in 1.477 ms 2025-08-16T03:24:02,002 | INFO | Blueprint Extender: 3 | ODLKeyTool | 163 - org.opendaylight.aaa.cert - 0.21.0 | ctl.jks is created 2025-08-16T03:24:02,028 | INFO | opendaylight-cluster-data-notification-dispatcher-54 | OpenFlowPluginProviderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | OpenFlowPluginProvider started, waiting for onSystemBootReady() 2025-08-16T03:24:02,029 | INFO | opendaylight-cluster-data-notification-dispatcher-54 | OpenFlowPluginProviderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Added connection provider org.opendaylight.openflowjava.protocol.impl.core.SwitchConnectionProviderImpl@e509762 2025-08-16T03:24:02,040 | INFO | opendaylight-cluster-data-notification-dispatcher-54 | OnfExtensionProvider | 308 - 
org.opendaylight.openflowplugin.extension-onf - 0.20.0 | ONF Extension Provider started. 2025-08-16T03:24:02,040 | INFO | opendaylight-cluster-data-notification-dispatcher-54 | OSGiSwitchConnectionProviders | 316 - org.opendaylight.openflowplugin.openflowjava.blueprint-config - 0.20.0 | Starting instance of type 'openflow-switch-connection-provider-legacy-impl' 2025-08-16T03:24:02,043 | INFO | opendaylight-cluster-data-notification-dispatcher-54 | OpenFlowPluginProviderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Added connection provider org.opendaylight.openflowjava.protocol.impl.core.SwitchConnectionProviderImpl@5ac2b8f7 2025-08-16T03:24:02,067 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-47 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Entity ownership change received for node : REMOTE_OWNERSHIP_CHANGED [wasOwner=false, isOwner=false, hasOwner=true] 2025-08-16T03:24:02,068 | INFO | Blueprint Extender: 3 | CertificateManagerService | 163 - org.opendaylight.aaa.cert - 0.21.0 | Certificate Manager service has been initialized 2025-08-16T03:24:02,068 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-47 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Entity ownership change received for node : REMOTE_OWNERSHIP_CHANGED [wasOwner=false, isOwner=false, hasOwner=true] 2025-08-16T03:24:02,071 | INFO | Blueprint Extender: 3 | CertificateManagerService | 163 - org.opendaylight.aaa.cert - 0.21.0 | AaaCert Rpc Service has been initialized 2025-08-16T03:24:02,085 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.cert/0.21.0 has been started 2025-08-16T03:24:02,086 | INFO | Blueprint Event Dispatcher: 1 | BlueprintBundleTracker | 180 - org.opendaylight.controller.blueprint - 11.0.0 | Blueprint container for bundle org.opendaylight.aaa.cert_0.21.0 [163] was successfully created 2025-08-16T03:24:02,097 | INFO | Blueprint Extender: 1 | StoreBuilder | 162 - org.opendaylight.aaa.authn-api - 0.21.0 | Checking if default entries must be created in IDM store 2025-08-16T03:24:02,230 | INFO | Blueprint Extender: 1 | AbstractStore | 167 - org.opendaylight.aaa.idm-store-h2 - 0.21.0 | Table AAA_DOMAINS does not exist, creating it 2025-08-16T03:24:02,308 | INFO | Blueprint Extender: 1 | StoreBuilder | 162 - org.opendaylight.aaa.authn-api - 0.21.0 | Created default domain 2025-08-16T03:24:02,314 | INFO | Blueprint Extender: 1 | AbstractStore | 167 - org.opendaylight.aaa.idm-store-h2 - 0.21.0 | Table AAA_ROLES does not exist, creating it 2025-08-16T03:24:02,360 | INFO | Blueprint Extender: 1 | StoreBuilder | 162 - org.opendaylight.aaa.authn-api - 0.21.0 | Created 'admin' role 2025-08-16T03:24:02,380 | INFO | Blueprint Extender: 1 | StoreBuilder | 162 - org.opendaylight.aaa.authn-api - 0.21.0 | Created 'user' role 2025-08-16T03:24:02,522 | INFO | Blueprint Extender: 1 | AbstractStore | 167 - org.opendaylight.aaa.idm-store-h2 - 0.21.0 | Table AAA_USERS does not exist, creating it 2025-08-16T03:24:02,589 | INFO | Blueprint Extender: 1 | AbstractStore | 167 - org.opendaylight.aaa.idm-store-h2 - 0.21.0 | Table AAA_GRANTS does not exist, creating it 2025-08-16T03:24:02,651 | INFO | Blueprint Extender: 1 | AAAShiroProvider | 172 - org.opendaylight.aaa.shiro - 0.21.0 | AAAShiroProvider Session Initiated 2025-08-16T03:24:02,800 | INFO | Blueprint Extender: 1 | IniSecurityManagerFactory | 171 - 
org.opendaylight.aaa.repackaged-shiro - 0.21.0 | Realms have been explicitly set on the SecurityManager instance - auto-setting of realms will not occur. 2025-08-16T03:24:02,828 | ERROR | Blueprint Extender: 1 | MdsalRestconfServer | 279 - org.opendaylight.netconf.restconf-server-mdsal - 9.0.0 | bundle org.opendaylight.netconf.restconf-server-mdsal:9.0.0 (279)[org.opendaylight.restconf.server.mdsal.MdsalRestconfServer(99)] : Constructor argument 5 in class class org.opendaylight.restconf.server.mdsal.MdsalRestconfServer has unsupported type [Lorg.opendaylight.restconf.server.spi.RpcImplementation; 2025-08-16T03:24:02,913 | INFO | Blueprint Extender: 1 | StoppableHttpServiceFactory | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Binding HTTP Service for bundle: [org.opendaylight.netconf.restconf-server-jaxrs_9.0.0 [278]] 2025-08-16T03:24:02,914 | INFO | paxweb-config-3-thread-1 | ServerModel | 397 - org.ops4j.pax.web.pax-web-spi - 8.0.30 | Created new ServletContextModel{id=ServletContextModel-11,contextPath='/rests'} 2025-08-16T03:24:02,915 | INFO | paxweb-config-3-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of OsgiContextModel{WB,id=OCM-9,name='RESTCONF',path='/rests',bundle=org.opendaylight.netconf.restconf-server-jaxrs,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=310, osgi.http.whiteboard.context.name=RESTCONF, service.bundleid=278, service.scope=singleton, osgi.http.whiteboard.context.path=/rests}}", size=2} 2025-08-16T03:24:02,915 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Creating new Jetty context for ServletContextModel{id=ServletContextModel-11,contextPath='/rests'} 2025-08-16T03:24:02,915 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Adding OsgiContextModel{WB,id=OCM-9,name='RESTCONF',path='/rests',bundle=org.opendaylight.netconf.restconf-server-jaxrs,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=310, osgi.http.whiteboard.context.name=RESTCONF, service.bundleid=278, service.scope=singleton, osgi.http.whiteboard.context.path=/rests}} to o.o.p.w.s.j.i.PaxWebServletContextHandler@581a3e59{/rests,null,STOPPED} 2025-08-16T03:24:02,917 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing default OSGi context model for o.o.p.w.s.j.i.PaxWebServletContextHandler@581a3e59{/rests,null,STOPPED} 2025-08-16T03:24:02,924 | INFO | paxweb-config-3-thread-1 | HttpServiceEnabled | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering FilterModel{id=FilterModel-12,name='org.opendaylight.aaa.filterchain.filters.CustomFilterAdapter',urlPatterns=[/*],contexts=[{WB,OCM-9,RESTCONF,/rests}]} 2025-08-16T03:24:02,925 | INFO | paxweb-config-3-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of FilterModel{id=FilterModel-12,name='org.opendaylight.aaa.filterchain.filters.CustomFilterAdapter',urlPatterns=[/*],contexts=[{WB,OCM-9,RESTCONF,/rests}]}", size=2} 2025-08-16T03:24:02,926 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context /rests 2025-08-16T03:24:02,927 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Starting Jetty context "/rests" with default Osgi Context 
OsgiContextModel{WB,id=OCM-9,name='RESTCONF',path='/rests',bundle=org.opendaylight.netconf.restconf-server-jaxrs,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=310, osgi.http.whiteboard.context.name=RESTCONF, service.bundleid=278, service.scope=singleton, osgi.http.whiteboard.context.path=/rests}} 2025-08-16T03:24:02,929 | INFO | paxweb-config-3-thread-1 | CustomFilterAdapter | 166 - org.opendaylight.aaa.filterchain - 0.21.0 | Initializing CustomFilterAdapter 2025-08-16T03:24:02,930 | INFO | paxweb-config-3-thread-1 | CustomFilterAdapter | 166 - org.opendaylight.aaa.filterchain - 0.21.0 | Injecting a new filter chain with 0 Filters: 2025-08-16T03:24:02,930 | INFO | paxweb-config-3-thread-1 | ContextHandler | 139 - org.eclipse.jetty.util - 9.4.57.v20241219 | Started o.o.p.w.s.j.i.PaxWebServletContextHandler@581a3e59{/rests,null,AVAILABLE} 2025-08-16T03:24:02,930 | INFO | paxweb-config-3-thread-1 | OsgiServletContext | 397 - org.ops4j.pax.web.pax-web-spi - 8.0.30 | Registering OsgiServletContext{model=OsgiContextModel{WB,id=OCM-9,name='RESTCONF',path='/rests',bundle=org.opendaylight.netconf.restconf-server-jaxrs,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=310, osgi.http.whiteboard.context.name=RESTCONF, service.bundleid=278, service.scope=singleton, osgi.http.whiteboard.context.path=/rests}}} as OSGi service for "/rests" context path 2025-08-16T03:24:02,931 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context / 2025-08-16T03:24:02,931 | INFO | paxweb-config-3-thread-1 | HttpServiceEnabled | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering FilterModel{id=FilterModel-14,name='org.opendaylight.aaa.shiro.filters.AAAShiroFilter',urlPatterns=[/*],contexts=[{WB,OCM-9,RESTCONF,/rests}]} 2025-08-16T03:24:02,931 | INFO | paxweb-config-3-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of FilterModel{id=FilterModel-14,name='org.opendaylight.aaa.shiro.filters.AAAShiroFilter',urlPatterns=[/*],contexts=[{WB,OCM-9,RESTCONF,/rests}]}", size=2} 2025-08-16T03:24:02,931 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context /rests 2025-08-16T03:24:02,932 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context / 2025-08-16T03:24:02,932 | INFO | paxweb-config-3-thread-1 | HttpServiceEnabled | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering ServletModel{id=ServletModel-15,name='org.glassfish.jersey.servlet.ServletContainer',urlPatterns=[/*],contexts=[{WB,OCM-9,RESTCONF,/rests}]} 2025-08-16T03:24:02,932 | INFO | paxweb-config-3-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of ServletModel{id=ServletModel-15,name='org.glassfish.jersey.servlet.ServletContainer',urlPatterns=[/*],contexts=[{WB,OCM-9,RESTCONF,/rests}]}", size=1} 2025-08-16T03:24:02,932 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Adding servlet ServletModel{id=ServletModel-15,name='org.glassfish.jersey.servlet.ServletContainer',urlPatterns=[/*],contexts=[{WB,OCM-9,RESTCONF,/rests}]} 2025-08-16T03:24:02,932 | INFO | Blueprint Extender: 1 | WhiteboardWebServer | 176 - org.opendaylight.aaa.web.osgi-impl - 0.21.0 | 
Bundle org.opendaylight.netconf.restconf-server-jaxrs_9.0.0 [278] registered context path /rests with 4 service(s) 2025-08-16T03:24:02,935 | INFO | paxweb-config-3-thread-1 | ServerModel | 397 - org.ops4j.pax.web.pax-web-spi - 8.0.30 | Created new ServletContextModel{id=ServletContextModel-18,contextPath='/.well-known'} 2025-08-16T03:24:02,935 | INFO | paxweb-config-3-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of OsgiContextModel{WB,id=OCM-16,name='WellKnownURIs',path='/.well-known',bundle=org.opendaylight.netconf.restconf-server-jaxrs,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=315, osgi.http.whiteboard.context.name=WellKnownURIs, service.bundleid=278, service.scope=singleton, osgi.http.whiteboard.context.path=/.well-known}}", size=2} 2025-08-16T03:24:02,935 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Creating new Jetty context for ServletContextModel{id=ServletContextModel-18,contextPath='/.well-known'} 2025-08-16T03:24:02,935 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Adding OsgiContextModel{WB,id=OCM-16,name='WellKnownURIs',path='/.well-known',bundle=org.opendaylight.netconf.restconf-server-jaxrs,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=315, osgi.http.whiteboard.context.name=WellKnownURIs, service.bundleid=278, service.scope=singleton, osgi.http.whiteboard.context.path=/.well-known}} to o.o.p.w.s.j.i.PaxWebServletContextHandler@3e773c0c{/.well-known,null,STOPPED} 2025-08-16T03:24:02,937 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing default OSGi context model for o.o.p.w.s.j.i.PaxWebServletContextHandler@3e773c0c{/.well-known,null,STOPPED} 2025-08-16T03:24:02,937 | INFO | paxweb-config-3-thread-1 | HttpServiceEnabled | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering FilterModel{id=FilterModel-19,name='org.opendaylight.aaa.shiro.filters.AAAShiroFilter',urlPatterns=[/*],contexts=[{WB,OCM-16,WellKnownURIs,/.well-known}]} 2025-08-16T03:24:02,937 | INFO | paxweb-config-3-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of FilterModel{id=FilterModel-19,name='org.opendaylight.aaa.shiro.filters.AAAShiroFilter',urlPatterns=[/*],contexts=[{WB,OCM-16,WellKnownURIs,/.well-known}]}", size=2} 2025-08-16T03:24:02,937 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context /rests 2025-08-16T03:24:02,938 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context /.well-known 2025-08-16T03:24:02,938 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Starting Jetty context "/.well-known" with default Osgi Context OsgiContextModel{WB,id=OCM-16,name='WellKnownURIs',path='/.well-known',bundle=org.opendaylight.netconf.restconf-server-jaxrs,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=315, osgi.http.whiteboard.context.name=WellKnownURIs, service.bundleid=278, service.scope=singleton, osgi.http.whiteboard.context.path=/.well-known}} 2025-08-16T03:24:02,938 | INFO | Blueprint Extender: 1 | WhiteboardWebServer | 176 - org.opendaylight.aaa.web.osgi-impl - 0.21.0 | 
Bundle org.opendaylight.netconf.restconf-server-jaxrs_9.0.0 [278] registered context path /.well-known with 3 service(s) 2025-08-16T03:24:02,938 | INFO | paxweb-config-3-thread-1 | ContextHandler | 139 - org.eclipse.jetty.util - 9.4.57.v20241219 | Started o.o.p.w.s.j.i.PaxWebServletContextHandler@3e773c0c{/.well-known,null,AVAILABLE} 2025-08-16T03:24:02,938 | INFO | paxweb-config-3-thread-1 | OsgiServletContext | 397 - org.ops4j.pax.web.pax-web-spi - 8.0.30 | Registering OsgiServletContext{model=OsgiContextModel{WB,id=OCM-16,name='WellKnownURIs',path='/.well-known',bundle=org.opendaylight.netconf.restconf-server-jaxrs,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=315, osgi.http.whiteboard.context.name=WellKnownURIs, service.bundleid=278, service.scope=singleton, osgi.http.whiteboard.context.path=/.well-known}}} as OSGi service for "/.well-known" context path 2025-08-16T03:24:02,939 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context / 2025-08-16T03:24:02,939 | INFO | paxweb-config-3-thread-1 | HttpServiceEnabled | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering ServletModel{id=ServletModel-20,name='Rootfound',urlPatterns=[/*],contexts=[{WB,OCM-16,WellKnownURIs,/.well-known}]} 2025-08-16T03:24:02,939 | INFO | paxweb-config-3-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of ServletModel{id=ServletModel-20,name='Rootfound',urlPatterns=[/*],contexts=[{WB,OCM-16,WellKnownURIs,/.well-known}]}", size=1} 2025-08-16T03:24:02,939 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Adding servlet ServletModel{id=ServletModel-20,name='Rootfound',urlPatterns=[/*],contexts=[{WB,OCM-16,WellKnownURIs,/.well-known}]} 2025-08-16T03:24:02,942 | INFO | Blueprint Extender: 1 | YangLibraryWriterSingleton | 291 - org.opendaylight.netconf.yanglib-mdsal-writer - 9.0.0 | Binding URL provider org.opendaylight.restconf.server.jaxrs.JaxRsYangLibrary@4054c19c 2025-08-16T03:24:02,974 | INFO | Blueprint Extender: 1 | StringValueObjectFactory | 331 - org.opendaylight.yangtools.binding-reflect - 14.0.14 | Instantiated factory for class org.opendaylight.yang.gen.v1.urn.ietf.params.xml.ns.yang.ietf.inet.types.rev130715.Ipv4AddressNoZone 2025-08-16T03:24:02,974 | INFO | Blueprint Extender: 1 | StringValueObjectFactory | 331 - org.opendaylight.yangtools.binding-reflect - 14.0.14 | Instantiated factory for class org.opendaylight.yang.gen.v1.urn.ietf.params.xml.ns.yang.ietf.inet.types.rev130715.Ipv4Prefix 2025-08-16T03:24:02,975 | INFO | Blueprint Extender: 1 | StringValueObjectFactory | 331 - org.opendaylight.yangtools.binding-reflect - 14.0.14 | Instantiated factory for class org.opendaylight.yang.gen.v1.urn.ietf.params.xml.ns.yang.ietf.inet.types.rev130715.Ipv6AddressNoZone 2025-08-16T03:24:02,975 | INFO | Blueprint Extender: 1 | StringValueObjectFactory | 331 - org.opendaylight.yangtools.binding-reflect - 14.0.14 | Instantiated factory for class org.opendaylight.yang.gen.v1.urn.ietf.params.xml.ns.yang.ietf.inet.types.rev130715.Ipv6Prefix 2025-08-16T03:24:03,002 | INFO | Blueprint Extender: 1 | RestconfTransportChannelListener | 276 - org.opendaylight.netconf.restconf-server - 9.0.0 | Initialized with service class org.opendaylight.restconf.server.mdsal.MdsalRestconfServer 2025-08-16T03:24:03,002 | INFO | Blueprint Extender: 1 | RestconfTransportChannelListener | 276 - 
org.opendaylight.netconf.restconf-server - 9.0.0 | Initialized with base path: /restconf, default encoding: JSON, default pretty print: false 2025-08-16T03:24:03,039 | INFO | Blueprint Extender: 1 | OSGiNorthbound | 275 - org.opendaylight.netconf.restconf-nb - 9.0.0 | Global RESTCONF northbound pools started 2025-08-16T03:24:03,050 | INFO | paxweb-config-3-thread-1 | ServerModel | 397 - org.ops4j.pax.web.pax-web-spi - 8.0.30 | Created new ServletContextModel{id=ServletContextModel-24,contextPath='/auth'} 2025-08-16T03:24:03,051 | INFO | paxweb-config-3-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of OsgiContextModel{WB,id=OCM-22,name='RealmManagement',path='/auth',bundle=org.opendaylight.aaa.shiro,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=320, osgi.http.whiteboard.context.name=RealmManagement, service.bundleid=172, service.scope=singleton, osgi.http.whiteboard.context.path=/auth}}", size=2} 2025-08-16T03:24:03,051 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Creating new Jetty context for ServletContextModel{id=ServletContextModel-24,contextPath='/auth'} 2025-08-16T03:24:03,052 | INFO | Blueprint Extender: 1 | WhiteboardWebServer | 176 - org.opendaylight.aaa.web.osgi-impl - 0.21.0 | Bundle org.opendaylight.aaa.shiro_0.21.0 [172] registered context path /auth with 4 service(s) 2025-08-16T03:24:03,052 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Adding OsgiContextModel{WB,id=OCM-22,name='RealmManagement',path='/auth',bundle=org.opendaylight.aaa.shiro,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=320, osgi.http.whiteboard.context.name=RealmManagement, service.bundleid=172, service.scope=singleton, osgi.http.whiteboard.context.path=/auth}} to o.o.p.w.s.j.i.PaxWebServletContextHandler@78897e8e{/auth,null,STOPPED} 2025-08-16T03:24:03,053 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.shiro/0.21.0 has been started 2025-08-16T03:24:03,054 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing default OSGi context model for o.o.p.w.s.j.i.PaxWebServletContextHandler@78897e8e{/auth,null,STOPPED} 2025-08-16T03:24:03,054 | INFO | paxweb-config-3-thread-1 | HttpServiceEnabled | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering FilterModel{id=FilterModel-25,name='org.opendaylight.aaa.filterchain.filters.CustomFilterAdapter',urlPatterns=[/*],contexts=[{WB,OCM-22,RealmManagement,/auth}]} 2025-08-16T03:24:03,055 | INFO | paxweb-config-3-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of FilterModel{id=FilterModel-25,name='org.opendaylight.aaa.filterchain.filters.CustomFilterAdapter',urlPatterns=[/*],contexts=[{WB,OCM-22,RealmManagement,/auth}]}", size=2} 2025-08-16T03:24:03,055 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context /auth 2025-08-16T03:24:03,055 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Starting Jetty context "/auth" with default Osgi Context 
OsgiContextModel{WB,id=OCM-22,name='RealmManagement',path='/auth',bundle=org.opendaylight.aaa.shiro,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=320, osgi.http.whiteboard.context.name=RealmManagement, service.bundleid=172, service.scope=singleton, osgi.http.whiteboard.context.path=/auth}} 2025-08-16T03:24:03,056 | INFO | paxweb-config-3-thread-1 | CustomFilterAdapter | 166 - org.opendaylight.aaa.filterchain - 0.21.0 | Initializing CustomFilterAdapter 2025-08-16T03:24:03,057 | INFO | Blueprint Event Dispatcher: 1 | BlueprintBundleTracker | 180 - org.opendaylight.controller.blueprint - 11.0.0 | Blueprint container for bundle org.opendaylight.aaa.shiro_0.21.0 [172] was successfully created 2025-08-16T03:24:03,060 | INFO | paxweb-config-3-thread-1 | CustomFilterAdapter | 166 - org.opendaylight.aaa.filterchain - 0.21.0 | Injecting a new filter chain with 0 Filters: 2025-08-16T03:24:03,060 | INFO | paxweb-config-3-thread-1 | ContextHandler | 139 - org.eclipse.jetty.util - 9.4.57.v20241219 | Started o.o.p.w.s.j.i.PaxWebServletContextHandler@78897e8e{/auth,null,AVAILABLE} 2025-08-16T03:24:03,060 | INFO | paxweb-config-3-thread-1 | OsgiServletContext | 397 - org.ops4j.pax.web.pax-web-spi - 8.0.30 | Registering OsgiServletContext{model=OsgiContextModel{WB,id=OCM-22,name='RealmManagement',path='/auth',bundle=org.opendaylight.aaa.shiro,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=320, osgi.http.whiteboard.context.name=RealmManagement, service.bundleid=172, service.scope=singleton, osgi.http.whiteboard.context.path=/auth}}} as OSGi service for "/auth" context path 2025-08-16T03:24:03,061 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context /rests 2025-08-16T03:24:03,061 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context /.well-known 2025-08-16T03:24:03,061 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context / 2025-08-16T03:24:03,061 | INFO | paxweb-config-3-thread-1 | HttpServiceEnabled | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering FilterModel{id=FilterModel-26,name='org.opendaylight.aaa.shiro.filters.AAAShiroFilter',urlPatterns=[/*, /moon/*],contexts=[{WB,OCM-22,RealmManagement,/auth}]} 2025-08-16T03:24:03,062 | INFO | paxweb-config-3-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of FilterModel{id=FilterModel-26,name='org.opendaylight.aaa.shiro.filters.AAAShiroFilter',urlPatterns=[/*, /moon/*],contexts=[{WB,OCM-22,RealmManagement,/auth}]}", size=2} 2025-08-16T03:24:03,062 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context /auth 2025-08-16T03:24:03,062 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context /rests 2025-08-16T03:24:03,062 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context /.well-known 2025-08-16T03:24:03,062 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context / 2025-08-16T03:24:03,062 | INFO | 
paxweb-config-3-thread-1 | HttpServiceEnabled | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering ServletModel{id=ServletModel-27,name='org.glassfish.jersey.servlet.ServletContainer',urlPatterns=[/*],contexts=[{WB,OCM-22,RealmManagement,/auth}]} 2025-08-16T03:24:03,062 | INFO | paxweb-config-3-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of ServletModel{id=ServletModel-27,name='org.glassfish.jersey.servlet.ServletContainer',urlPatterns=[/*],contexts=[{WB,OCM-22,RealmManagement,/auth}]}", size=1} 2025-08-16T03:24:03,062 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Adding servlet ServletModel{id=ServletModel-27,name='org.glassfish.jersey.servlet.ServletContainer',urlPatterns=[/*],contexts=[{WB,OCM-22,RealmManagement,/auth}]} 2025-08-16T03:24:03,588 | INFO | SystemReadyService-0 | KarafSystemReady | 202 - org.opendaylight.infrautils.ready-impl - 7.1.4 | checkBundleDiagInfos: Elapsed time 22s, remaining time 277s, diag: Active {INSTALLED=0, RESOLVED=10, UNKNOWN=0, GRACE_PERIOD=0, WAITING=0, STARTING=0, ACTIVE=399, STOPPING=0, FAILURE=0} 2025-08-16T03:24:03,589 | INFO | SystemReadyService-0 | SimpleSystemReadyMonitor | 201 - org.opendaylight.infrautils.ready-api - 7.1.4 | System ready; AKA: Aye captain, all warp coils are now operating at peak efficiency! [M.] 2025-08-16T03:24:03,589 | INFO | SystemReadyService-0 | SimpleSystemReadyMonitor | 201 - org.opendaylight.infrautils.ready-api - 7.1.4 | Now notifying all its registered SystemReadyListeners... 2025-08-16T03:24:03,589 | INFO | SystemReadyService-0 | OpenFlowPluginProviderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | onSystemBootReady() received, starting the switch connections 2025-08-16T03:24:03,697 | INFO | epollEventLoopGroup-4-1 | TcpServerFacade | 318 - org.opendaylight.openflowplugin.openflowjava.openflow-protocol-impl - 0.20.0 | Switch listener started and ready to accept incoming TCP/TLS connections on /[0:0:0:0:0:0:0:0]:6633 2025-08-16T03:24:03,697 | INFO | epollEventLoopGroup-2-1 | TcpServerFacade | 318 - org.opendaylight.openflowplugin.openflowjava.openflow-protocol-impl - 0.20.0 | Switch listener started and ready to accept incoming TCP/TLS connections on /[0:0:0:0:0:0:0:0]:6653 2025-08-16T03:24:03,698 | INFO | epollEventLoopGroup-4-1 | SwitchConnectionProviderImpl | 318 - org.opendaylight.openflowplugin.openflowjava.openflow-protocol-impl - 0.20.0 | Started TCP connection on /[0:0:0:0:0:0:0:0]:6633 2025-08-16T03:24:03,698 | INFO | epollEventLoopGroup-4-1 | OpenFlowPluginProviderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Connection provider org.opendaylight.openflowjava.protocol.impl.core.SwitchConnectionProviderImpl@5ac2b8f7 started 2025-08-16T03:24:03,698 | INFO | epollEventLoopGroup-2-1 | SwitchConnectionProviderImpl | 318 - org.opendaylight.openflowplugin.openflowjava.openflow-protocol-impl - 0.20.0 | Started TCP connection on /[0:0:0:0:0:0:0:0]:6653 2025-08-16T03:24:03,698 | INFO | epollEventLoopGroup-2-1 | OpenFlowPluginProviderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Connection provider org.opendaylight.openflowjava.protocol.impl.core.SwitchConnectionProviderImpl@e509762 started 2025-08-16T03:24:03,699 | INFO | epollEventLoopGroup-2-1 | OpenFlowPluginProviderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | All switchConnectionProviders are up and running (2). 
2025-08-16T03:24:27,155 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-41 | PhiAccrualFailureDetector | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | heartbeat interval is growing too large for address pekko://opendaylight-cluster-data@10.30.170.62:2550: 2064 millis 2025-08-16T03:24:27,156 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-41 | PhiAccrualFailureDetector | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | heartbeat interval is growing too large for address pekko://opendaylight-cluster-data@10.30.170.196:2550: 2067 millis 2025-08-16T03:25:32,222 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | PhiAccrualFailureDetector | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | heartbeat interval is growing too large for address pekko://opendaylight-cluster-data@10.30.170.62:2550: 2745 millis 2025-08-16T03:25:32,224 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | PhiAccrualFailureDetector | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | heartbeat interval is growing too large for address pekko://opendaylight-cluster-data@10.30.170.196:2550: 3192 millis 2025-08-16T03:26:49,171 | INFO | sshd-SshServer[3030c5c9](port=8101)-nio2-thread-1 | OpenSSHKeyPairProvider | 121 - org.apache.karaf.shell.ssh - 4.4.7 | Creating ssh server private key at /tmp/karaf-0.23.0/etc/host.key 2025-08-16T03:26:49,173 | INFO | sshd-SshServer[3030c5c9](port=8101)-nio2-thread-1 | OpenSSHKeyPairGenerator | 121 - org.apache.karaf.shell.ssh - 4.4.7 | generateKeyPair(RSA) generating host key - size=2048 2025-08-16T03:26:49,534 | INFO | sshd-SshServer[3030c5c9](port=8101)-nio2-thread-2 | ServerSessionImpl | 125 - org.apache.sshd.osgi - 2.14.0 | Session karaf@/10.30.171.78:49456 authenticated 2025-08-16T03:26:50,733 | INFO | pipe-log:log "ROBOT MESSAGE: Starting suite /w/workspace/openflowplugin-csit-3node-clustering-bulkomatic-only-titanium/test/csit/suites/openflowplugin/Clustering_Bulkomatic/010__Cluster_Reconcilliation_Multi_DPN.robot" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting suite /w/workspace/openflowplugin-csit-3node-clustering-bulkomatic-only-titanium/test/csit/suites/openflowplugin/Clustering_Bulkomatic/010__Cluster_Reconcilliation_Multi_DPN.robot 2025-08-16T03:26:51,289 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Check Shards Status And Initialize Variables" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Check Shards Status And Initialize Variables 2025-08-16T03:26:54,060 | INFO | qtp415579349-617 | JaxRsRestconf | 278 - org.opendaylight.netconf.restconf-server-jaxrs - 9.0.0 | RESTCONF data-missing condition is reported as HTTP status 409 (RFC8040) 2025-08-16T03:26:54,064 | INFO | qtp415579349-617 | JaxRsRestconf | 278 - org.opendaylight.netconf.restconf-server-jaxrs - 9.0.0 | RESTCONF data-missing condition is reported as HTTP status 409 (RFC8040) 2025-08-16T03:26:54,625 | INFO | qtp415579349-617 | AuthenticationManager | 174 - org.opendaylight.aaa.tokenauthrealm - 0.21.0 | Authentication is now enabled 2025-08-16T03:26:54,626 | INFO | qtp415579349-617 | AuthenticationManager | 174 - org.opendaylight.aaa.tokenauthrealm - 0.21.0 | Authentication Manager activated 2025-08-16T03:26:54,719 | INFO | qtp415579349-617 | ApiPathParser | 273 - 
org.opendaylight.netconf.restconf-api - 9.0.0 | Consecutive slashes in REST URLs will be rejected 2025-08-16T03:26:58,912 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Get Inventory Follower and Leader Before Cluster Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Get Inventory Follower and Leader Before Cluster Restart 2025-08-16T03:26:59,966 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Start Mininet Connect To Follower Node1" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Start Mininet Connect To Follower Node1 2025-08-16T03:27:03,187 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Add Bulk Flow From Follower" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Add Bulk Flow From Follower 2025-08-16T03:27:03,529 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | DeviceOwnershipServiceImpl | 298 - org.opendaylight.openflowplugin.applications.device-ownership-service - 0.20.0 | Entity ownership change received for node : openflow:1 : REMOTE_OWNERSHIP_LOST_NO_OWNER [wasOwner=false, isOwner=false, hasOwner=false] 2025-08-16T03:27:03,576 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Entity ownership change received for node : REMOTE_OWNERSHIP_LOST_NO_OWNER [wasOwner=false, isOwner=false, hasOwner=false] 2025-08-16T03:27:03,605 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | DeviceOwnershipServiceImpl | 298 - org.opendaylight.openflowplugin.applications.device-ownership-service - 0.20.0 | Entity ownership change received for node : openflow:1 : REMOTE_OWNERSHIP_CHANGED [wasOwner=false, isOwner=false, hasOwner=true] 2025-08-16T03:27:03,644 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Entity ownership change received for node : REMOTE_OWNERSHIP_CHANGED [wasOwner=false, isOwner=false, hasOwner=true] 2025-08-16T03:27:04,012 | INFO | opendaylight-cluster-data-notification-dispatcher-53 | ConnectionManagerImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Clearing the device connection timer for the device 1 2025-08-16T03:27:04,112 | WARN | node-cleaner-0 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Seems like device is still owned by other controller instance. Skip deleting openflow:1 node from operational datastore. 
2025-08-16T03:27:04,738 | WARN | opendaylight-cluster-data-shard-dispatcher-39 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Supervisor Strategy caught unexpected exception - resuming
java.lang.UnsupportedOperationException: null
    at org.opendaylight.raft.journal.MappedByteBuf.getBytes(MappedByteBuf.java:363) ~[bundleFile:11.0.0]
    at io.netty.buffer.AbstractUnpooledSlicedByteBuf.getBytes(AbstractUnpooledSlicedByteBuf.java:397) ~[bundleFile:4.2.2.Final]
    at org.opendaylight.raft.spi.BufThenFileOutputStream.switchToFile(BufThenFileOutputStream.java:144) ~[bundleFile:11.0.0]
    at org.opendaylight.raft.spi.BufThenFileOutputStream.write(BufThenFileOutputStream.java:111) ~[bundleFile:11.0.0]
    at java.io.ObjectOutputStream$BlockDataOutputStream.write(ObjectOutputStream.java:1875) ~[?:?]
    at java.io.ObjectOutputStream.write(ObjectOutputStream.java:725) ~[?:?]
    at org.opendaylight.raft.spi.ChunkedByteArray.copyTo(ChunkedByteArray.java:62) ~[bundleFile:11.0.0]
    at org.opendaylight.controller.cluster.datastore.persisted.CT.writeExternal(CT.java:40) ~[bundleFile:?]
    at java.io.ObjectOutputStream.writeExternalData(ObjectOutputStream.java:1478) ~[?:?]
    at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1449) ~[?:?]
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1194) ~[?:?]
    at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:358) ~[?:?]
    at org.opendaylight.controller.cluster.raft.spi.EntryJournalV1$LogEntryWriter.objectToBytes(EntryJournalV1.java:245) ~[bundleFile:11.0.0]
    at org.opendaylight.controller.cluster.raft.spi.EntryJournalV1$LogEntryWriter.objectToBytes(EntryJournalV1.java:209) ~[bundleFile:11.0.0]
    at org.opendaylight.raft.journal.SegmentWriter.append(SegmentWriter.java:90) ~[bundleFile:11.0.0]
    at org.opendaylight.raft.journal.SegmentedEntryWriter.append(SegmentedEntryWriter.java:52) ~[bundleFile:11.0.0]
    at org.opendaylight.controller.cluster.raft.spi.EntryJournalV1.appendEntry(EntryJournalV1.java:480) ~[bundleFile:11.0.0]
    at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.runBatch(JournalWriteTask.java:385) ~[bundleFile:11.0.0]
    at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.run(JournalWriteTask.java:305) ~[bundleFile:11.0.0]
    at java.lang.VirtualThread.run(VirtualThread.java:329) ~[?:?]
2025-08-16T03:27:04,750 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-45 | OneForOneStrategy | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | null
2025-08-16T03:27:08,136 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-41 | PhiAccrualFailureDetector | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | heartbeat interval is growing too large for address pekko://opendaylight-cluster-data@10.30.170.62:2550: 3850 millis
2025-08-16T03:27:08,138 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-41 | PhiAccrualFailureDetector | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | heartbeat interval is growing too large for address pekko://opendaylight-cluster-data@10.30.170.196:2550: 3852 millis
2025-08-16T03:28:44,258 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Get Bulk Flows and Verify In Inventory Leader" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Get Bulk Flows and Verify In Inventory Leader
2025-08-16T03:28:44,579 | WARN | opendaylight-cluster-data-shard-dispatcher-40 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Supervisor Strategy caught unexpected exception - resuming
java.util.concurrent.CancellationException: Previous action failed
    at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.newCancellationWithCause(JournalWriteTask.java:459) ~[bundleFile:11.0.0]
    at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.abortAndFailAction(JournalWriteTask.java:440) ~[bundleFile:11.0.0]
    at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.runBatch(JournalWriteTask.java:402) ~[bundleFile:11.0.0]
    at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.run(JournalWriteTask.java:305) ~[bundleFile:11.0.0]
    at java.lang.VirtualThread.run(VirtualThread.java:329) ~[?:?]
Caused by: java.lang.UnsupportedOperationException
    at org.opendaylight.raft.journal.MappedByteBuf.getBytes(MappedByteBuf.java:363) ~[bundleFile:11.0.0]
    at io.netty.buffer.AbstractUnpooledSlicedByteBuf.getBytes(AbstractUnpooledSlicedByteBuf.java:397) ~[bundleFile:4.2.2.Final]
    at org.opendaylight.raft.spi.BufThenFileOutputStream.switchToFile(BufThenFileOutputStream.java:144) ~[bundleFile:11.0.0]
    at org.opendaylight.raft.spi.BufThenFileOutputStream.write(BufThenFileOutputStream.java:111) ~[bundleFile:11.0.0]
    at java.io.ObjectOutputStream$BlockDataOutputStream.write(ObjectOutputStream.java:1875) ~[?:?]
    at java.io.ObjectOutputStream.write(ObjectOutputStream.java:725) ~[?:?]
    at org.opendaylight.raft.spi.ChunkedByteArray.copyTo(ChunkedByteArray.java:62) ~[bundleFile:11.0.0]
    at org.opendaylight.controller.cluster.datastore.persisted.CT.writeExternal(CT.java:40) ~[bundleFile:?]
    at java.io.ObjectOutputStream.writeExternalData(ObjectOutputStream.java:1478) ~[?:?]
    at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1449) ~[?:?]
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1194) ~[?:?]
    at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:358) ~[?:?]
    at org.opendaylight.controller.cluster.raft.spi.EntryJournalV1$LogEntryWriter.objectToBytes(EntryJournalV1.java:245) ~[bundleFile:11.0.0]
    at org.opendaylight.controller.cluster.raft.spi.EntryJournalV1$LogEntryWriter.objectToBytes(EntryJournalV1.java:209) ~[bundleFile:11.0.0]
    at org.opendaylight.raft.journal.SegmentWriter.append(SegmentWriter.java:90) ~[bundleFile:11.0.0]
    at org.opendaylight.raft.journal.SegmentedEntryWriter.append(SegmentedEntryWriter.java:52) ~[bundleFile:11.0.0]
    at org.opendaylight.controller.cluster.raft.spi.EntryJournalV1.appendEntry(EntryJournalV1.java:480) ~[bundleFile:11.0.0]
    at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.runBatch(JournalWriteTask.java:385) ~[bundleFile:11.0.0]
    ... 2 more
2025-08-16T03:28:44,581 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | OneForOneStrategy | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Previous action failed
2025-08-16T03:28:45,097 | WARN | opendaylight-cluster-data-shard-dispatcher-37 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Supervisor Strategy caught unexpected exception - resuming
java.util.concurrent.CancellationException: Previous action failed
    at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.newCancellationWithCause(JournalWriteTask.java:459) ~[bundleFile:11.0.0]
    at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.abortAndFailAction(JournalWriteTask.java:440) ~[bundleFile:11.0.0]
    at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.runBatch(JournalWriteTask.java:402) ~[bundleFile:11.0.0]
    at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.run(JournalWriteTask.java:305) ~[bundleFile:11.0.0]
    at java.lang.VirtualThread.run(VirtualThread.java:329) ~[?:?]
Caused by: java.lang.UnsupportedOperationException
    at org.opendaylight.raft.journal.MappedByteBuf.getBytes(MappedByteBuf.java:363) ~[bundleFile:11.0.0]
    at io.netty.buffer.AbstractUnpooledSlicedByteBuf.getBytes(AbstractUnpooledSlicedByteBuf.java:397) ~[bundleFile:4.2.2.Final]
    at org.opendaylight.raft.spi.BufThenFileOutputStream.switchToFile(BufThenFileOutputStream.java:144) ~[bundleFile:11.0.0]
    at org.opendaylight.raft.spi.BufThenFileOutputStream.write(BufThenFileOutputStream.java:111) ~[bundleFile:11.0.0]
    at java.io.ObjectOutputStream$BlockDataOutputStream.write(ObjectOutputStream.java:1875) ~[?:?]
    at java.io.ObjectOutputStream.write(ObjectOutputStream.java:725) ~[?:?]
    at org.opendaylight.raft.spi.ChunkedByteArray.copyTo(ChunkedByteArray.java:62) ~[bundleFile:11.0.0]
    at org.opendaylight.controller.cluster.datastore.persisted.CT.writeExternal(CT.java:40) ~[bundleFile:?]
    at java.io.ObjectOutputStream.writeExternalData(ObjectOutputStream.java:1478) ~[?:?]
    at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1449) ~[?:?]
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1194) ~[?:?]
    at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:358) ~[?:?]
    at org.opendaylight.controller.cluster.raft.spi.EntryJournalV1$LogEntryWriter.objectToBytes(EntryJournalV1.java:245) ~[bundleFile:11.0.0]
    at org.opendaylight.controller.cluster.raft.spi.EntryJournalV1$LogEntryWriter.objectToBytes(EntryJournalV1.java:209) ~[bundleFile:11.0.0]
    at org.opendaylight.raft.journal.SegmentWriter.append(SegmentWriter.java:90) ~[bundleFile:11.0.0]
    at org.opendaylight.raft.journal.SegmentedEntryWriter.append(SegmentedEntryWriter.java:52) ~[bundleFile:11.0.0]
    at org.opendaylight.controller.cluster.raft.spi.EntryJournalV1.appendEntry(EntryJournalV1.java:480) ~[bundleFile:11.0.0]
    at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.runBatch(JournalWriteTask.java:385) ~[bundleFile:11.0.0]
    ... 2 more
2025-08-16T03:28:45,098 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | OneForOneStrategy | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Previous action failed
2025-08-16T03:30:24,950 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Verify Flows In Switch Before Cluster Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Verify Flows In Switch Before Cluster Restart
2025-08-16T03:30:33,590 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-41 | PhiAccrualFailureDetector | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | heartbeat interval is growing too large for address pekko://opendaylight-cluster-data@10.30.170.62:2550: 4161 millis
2025-08-16T03:30:33,593 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | PhiAccrualFailureDetector | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | heartbeat interval is growing too large for address pekko://opendaylight-cluster-data@10.30.170.196:2550: 4201 millis
2025-08-16T03:32:05,417 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Stop Mininet Connected To Follower Node1 and Exit" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Stop Mininet Connected To Follower Node1 and Exit
2025-08-16T03:32:05,675 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-42 | DeviceOwnershipServiceImpl | 298 - org.opendaylight.openflowplugin.applications.device-ownership-service - 0.20.0 | Entity ownership change received for node : openflow:1 : REMOTE_OWNERSHIP_LOST_NO_OWNER [wasOwner=false, isOwner=false, hasOwner=false]
2025-08-16T03:32:05,675 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-47 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Entity ownership change received for node : REMOTE_OWNERSHIP_LOST_NO_OWNER [wasOwner=false, isOwner=false, hasOwner=false]
2025-08-16T03:32:06,187 | INFO | node-cleaner-0 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Try to remove device openflow:1 from operational DS
2025-08-16T03:32:08,077 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Start Mininet Reconnect To Follower Node1" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Start Mininet Reconnect To
Follower Node1 2025-08-16T03:32:10,902 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Verify Flows In Switch Reconnected To Follower Node1" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Verify Flows In Switch Reconnected To Follower Node1 2025-08-16T03:32:11,035 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-43 | DeviceOwnershipServiceImpl | 298 - org.opendaylight.openflowplugin.applications.device-ownership-service - 0.20.0 | Entity ownership change received for node : openflow:1 : REMOTE_OWNERSHIP_CHANGED [wasOwner=false, isOwner=false, hasOwner=true] 2025-08-16T03:32:11,154 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-42 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Entity ownership change received for node : REMOTE_OWNERSHIP_CHANGED [wasOwner=false, isOwner=false, hasOwner=true] 2025-08-16T03:33:53,208 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Stop Mininet Connected To Follower Node1" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Stop Mininet Connected To Follower Node1 2025-08-16T03:33:53,505 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | DeviceOwnershipServiceImpl | 298 - org.opendaylight.openflowplugin.applications.device-ownership-service - 0.20.0 | Entity ownership change received for node : openflow:1 : REMOTE_OWNERSHIP_LOST_NO_OWNER [wasOwner=false, isOwner=false, hasOwner=false] 2025-08-16T03:33:53,505 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Entity ownership change received for node : REMOTE_OWNERSHIP_LOST_NO_OWNER [wasOwner=false, isOwner=false, hasOwner=false] 2025-08-16T03:33:54,012 | INFO | node-cleaner-1 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Try to remove device openflow:1 from operational DS 2025-08-16T03:33:55,896 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Start Mininet Connect To Follower Node2" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Start Mininet Connect To Follower Node2 2025-08-16T03:33:58,251 | INFO | epollEventLoopGroup-5-1 | SystemNotificationsListenerImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | ConnectionEvent: Connection closed by device, Device:/10.30.170.99:51132, NodeId:null 2025-08-16T03:33:58,309 | INFO | epollEventLoopGroup-5-2 | ConnectionAdapterImpl | 318 - org.opendaylight.openflowplugin.openflowjava.openflow-protocol-impl - 0.20.0 | Hello received 2025-08-16T03:33:58,670 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Verify Flows In Switch Connected To Follower Node2" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Verify Flows In Switch Connected To Follower Node2 2025-08-16T03:33:58,705 | INFO | epollEventLoopGroup-5-2 | 
ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Device openflow:1 connected. 2025-08-16T03:33:58,705 | INFO | epollEventLoopGroup-5-2 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | No context chain found for device: openflow:1, creating new. 2025-08-16T03:33:58,705 | INFO | epollEventLoopGroup-5-2 | DeviceManagerImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | ConnectionEvent: Device connected to controller, Device:/10.30.170.99:51140, NodeId:Uri{value=openflow:1} 2025-08-16T03:33:58,727 | INFO | epollEventLoopGroup-5-2 | RoleContextImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Started timer for setting SLAVE role on device openflow:1 if no role will be set in 20s. 2025-08-16T03:33:58,944 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-45 | DeviceOwnershipServiceImpl | 298 - org.opendaylight.openflowplugin.applications.device-ownership-service - 0.20.0 | Entity ownership change received for node : openflow:1 : LOCAL_OWNERSHIP_GRANTED [wasOwner=false, isOwner=true, hasOwner=true] 2025-08-16T03:33:59,065 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-47 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Entity ownership change received for node : LOCAL_OWNERSHIP_GRANTED [wasOwner=false, isOwner=true, hasOwner=true] 2025-08-16T03:33:59,065 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | GuardedContextImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Starting DeviceContextImpl[NEW] service for node openflow:1 2025-08-16T03:33:59,077 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | GuardedContextImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Starting RpcContextImpl[NEW] service for node openflow:1 2025-08-16T03:33:59,097 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | GuardedContextImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Starting StatisticsContextImpl[NEW] service for node openflow:1 2025-08-16T03:33:59,097 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | GuardedContextImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Starting RoleContextImpl[NEW] service for node openflow:1 2025-08-16T03:33:59,099 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | SalRoleRpc | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | SetRole called with input:SetRoleInput{controllerRole=BECOMEMASTER, node=NodeRef{value=DataObjectIdentifier[ @ urn.opendaylight.inventory.rev130819.Nodes ... 
nodes.Node[NodeKey{id=Uri{value=openflow:1}}] ]}} 2025-08-16T03:33:59,099 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | SalRoleRpc | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Requesting state change to BECOMEMASTER 2025-08-16T03:33:59,099 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | SalRoleRpc | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | RoleChangeTask called on device:openflow:1 OFPRole:BECOMEMASTER 2025-08-16T03:33:59,099 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | RoleService | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | getGenerationIdFromDevice called for device: openflow:1 2025-08-16T03:33:59,104 | INFO | epollEventLoopGroup-5-2 | RoleService | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | submitRoleChange called for device:Uri{value=openflow:1}, role:BECOMEMASTER 2025-08-16T03:33:59,106 | INFO | epollEventLoopGroup-5-2 | RoleService | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | submitRoleChange onSuccess for device:Uri{value=openflow:1}, role:BECOMEMASTER 2025-08-16T03:33:59,106 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | ContextChainImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Started clustering services for node openflow:1 2025-08-16T03:33:59,112 | INFO | ofppool-0 | FlowNodeReconciliationImpl | 299 - org.opendaylight.openflowplugin.applications.forwardingrules-manager - 0.20.0 | Triggering reconciliation for device NodeKey{id=Uri{value=openflow:1}} 2025-08-16T03:33:59,126 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:34:00,150 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] 
at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:34:01,169 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:34:02,189 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:34:03,209 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:34:04,228 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:34:05,248 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:34:06,267 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:34:07,288 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:34:08,308 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:34:09,328 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:34:10,349 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ... 5 more
2025-08-16T03:34:11,368 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:34:12,388 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:34:13,408 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:34:14,428 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:34:15,449 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:34:16,469 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:34:17,487 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:34:18,508 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:34:19,529 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:34:20,548 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:34:21,568 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:34:22,589 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:34:23,608 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:34:24,629 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:34:25,649 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:34:26,671 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
    at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
    at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
    at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:34:27,689 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:34:28,708 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:34:29,730 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:34:30,750 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:34:31,767 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:34:32,788 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:34:33,809 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:34:34,828 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:34:35,847 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:34:36,868 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:34:37,888 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:34:38,907 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
2025-08-16T03:34:39,927 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:34:40,949 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:34:41,969 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:34:42,988 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:34:44,008 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:34:45,028 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:34:46,048 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:34:47,068 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:34:48,088 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:34:49,107 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:34:50,127 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:34:51,149 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:34:52,168 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:34:53,189 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:34:54,208 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:34:55,229 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:34:56,247 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:34:57,269 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:34:58,289 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:34:59,308 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:35:00,327 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:35:01,348 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:35:02,368 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:35:03,389 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:35:04,408 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:35:05,431 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	... 5 more
2025-08-16T03:35:06,448 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
	at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
	at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
	at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	... 5 more
2025-08-16T03:35:07,468 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:35:08,487 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:35:09,507 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:35:10,527 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:35:11,547 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:35:12,568 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:35:13,588 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:35:14,608 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:35:15,628 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:35:16,647 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:35:17,668 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:35:18,687 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:35:19,711 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:35:20,729 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:35:21,748 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:35:22,768 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
	at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
	at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
	at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:35:23,788 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:35:24,808 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:35:25,828 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:35:26,848 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:35:27,868 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:35:28,888 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:35:29,908 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:35:30,932 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:35:31,947 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:35:32,967 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:35:33,988 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:35:35,008 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:35:36,029 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:35:37,047 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:35:38,068 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:35:39,089 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:35:39,112 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Stop Mininet Connected To Follower Node2" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Stop Mininet Connected To Follower Node2 2025-08-16T03:35:39,135 | INFO | epollEventLoopGroup-5-2 | StringValueObjectFactory | 331 - org.opendaylight.yangtools.binding-reflect - 14.0.14 | Instantiated factory for class org.opendaylight.yang.gen.v1.urn.ietf.params.xml.ns.yang.ietf.yang.types.rev130715.MacAddress 2025-08-16T03:35:39,136 | INFO | epollEventLoopGroup-5-2 | StringValueObjectFactory | 331 - org.opendaylight.yangtools.binding-reflect - 14.0.14 | Instantiated factory for class org.opendaylight.yang.gen.v1.urn.ietf.params.xml.ns.yang.ietf.yang.types.rev130715.PhysAddress 2025-08-16T03:35:39,137 | INFO | epollEventLoopGroup-5-2 | StringValueObjectFactory | 331 - org.opendaylight.yangtools.binding-reflect - 14.0.14 | Instantiated factory for class org.opendaylight.yang.gen.v1.urn.ietf.params.xml.ns.yang.ietf.yang.types.rev130715.HexString 2025-08-16T03:35:39,137 | INFO | epollEventLoopGroup-5-2 | StringValueObjectFactory | 331 - org.opendaylight.yangtools.binding-reflect - 14.0.14 | Instantiated factory for class org.opendaylight.yang.gen.v1.urn.ietf.params.xml.ns.yang.ietf.yang.types.rev130715.DottedQuad 2025-08-16T03:35:39,138 | INFO | epollEventLoopGroup-5-2 | StringValueObjectFactory | 331 - org.opendaylight.yangtools.binding-reflect - 14.0.14 | Instantiated factory for class org.opendaylight.yang.gen.v1.urn.ietf.params.xml.ns.yang.ietf.yang.types.rev130715.Uuid 2025-08-16T03:35:39,191 | INFO | epollEventLoopGroup-5-2 | SystemNotificationsListenerImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | ConnectionEvent: Connection closed by device, Device:/10.30.170.99:51140, NodeId:openflow:1 2025-08-16T03:35:39,192 | INFO | epollEventLoopGroup-5-2 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Device openflow:1 disconnected. 
2025-08-16T03:35:39,192 | INFO | epollEventLoopGroup-5-2 | ReconciliationManagerImpl | 302 - org.opendaylight.openflowplugin.applications.reconciliation-framework - 0.20.0 | Stopping reconciliation for node Uri{value=openflow:1} 2025-08-16T03:35:39,192 | WARN | epollEventLoopGroup-5-2 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Reconciliation framework failure for device openflow:1 java.util.concurrent.CancellationException: Task was cancelled. at com.google.common.util.concurrent.AbstractFuture.cancellationExceptionWithCause(AbstractFuture.java:1021) ~[?:?] at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:288) ~[?:?] at com.google.common.util.concurrent.AbstractFutureState.blockingGet(AbstractFutureState.java:235) ~[?:?] at com.google.common.util.concurrent.Platform.get(Platform.java:54) ~[?:?] at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:253) ~[?:?] at com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:91) ~[?:?] at com.google.common.util.concurrent.Uninterruptibles.getUninterruptibly(Uninterruptibles.java:246) ~[?:?] at com.google.common.util.concurrent.Futures.getDone(Futures.java:1175) ~[?:?] at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1123) ~[?:?] at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30) ~[?:?] at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1004) ~[?:?] at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:767) ~[?:?] at com.google.common.util.concurrent.AbstractFuture.cancel(AbstractFuture.java:372) ~[?:?] at com.google.common.util.concurrent.FluentFuture$TrustedFuture.cancel(FluentFuture.java:120) ~[?:?] at org.opendaylight.openflowplugin.applications.reconciliation.impl.ReconciliationManagerImpl.cancelNodeReconciliation(ReconciliationManagerImpl.java:138) ~[?:?] at org.opendaylight.openflowplugin.applications.reconciliation.impl.ReconciliationManagerImpl.onDeviceDisconnected(ReconciliationManagerImpl.java:115) ~[?:?] at org.opendaylight.openflowplugin.impl.mastership.MastershipChangeServiceManagerImpl.becomeSlaveOrDisconnect(MastershipChangeServiceManagerImpl.java:101) ~[?:?] at org.opendaylight.openflowplugin.impl.lifecycle.ContextChainHolderImpl.destroyContextChain(ContextChainHolderImpl.java:363) ~[?:?] at org.opendaylight.openflowplugin.impl.lifecycle.ContextChainHolderImpl.onDeviceDisconnected(ContextChainHolderImpl.java:273) ~[?:?] at org.opendaylight.openflowplugin.impl.connection.ConnectionContextImpl.propagateDeviceDisconnectedEvent(ConnectionContextImpl.java:179) ~[?:?] at org.opendaylight.openflowplugin.impl.connection.ConnectionContextImpl.disconnectDevice(ConnectionContextImpl.java:168) ~[?:?] at org.opendaylight.openflowplugin.impl.connection.ConnectionContextImpl.onConnectionClosed(ConnectionContextImpl.java:126) ~[?:?] at org.opendaylight.openflowplugin.impl.connection.listener.SystemNotificationsListenerImpl.onDisconnect(SystemNotificationsListenerImpl.java:86) ~[?:?] at org.opendaylight.openflowjava.protocol.impl.core.connection.ConnectionAdapterImpl.consumeDeviceMessage(ConnectionAdapterImpl.java:121) ~[?:?] at org.opendaylight.openflowjava.protocol.impl.core.connection.AbstractConnectionAdapterStatistics.consume(AbstractConnectionAdapterStatistics.java:68) ~[?:?] 
at org.opendaylight.openflowjava.protocol.impl.core.connection.ConnectionAdapterImpl.consume(ConnectionAdapterImpl.java:62) ~[?:?] at org.opendaylight.openflowjava.protocol.impl.core.DelegatingInboundHandler.channelInactive(DelegatingInboundHandler.java:53) ~[?:?] at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:251) ~[?:?] at io.netty.channel.ChannelInboundHandlerAdapter.channelInactive(ChannelInboundHandlerAdapter.java:81) ~[?:?] at org.opendaylight.openflowjava.protocol.impl.core.connection.AbstractOutboundQueueManager.channelInactive(AbstractOutboundQueueManager.java:169) ~[?:?] at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:251) ~[?:?] at io.netty.channel.ChannelInboundHandlerAdapter.channelInactive(ChannelInboundHandlerAdapter.java:81) ~[?:?] at io.netty.handler.timeout.IdleStateHandler.channelInactive(IdleStateHandler.java:284) ~[?:?] at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:251) ~[?:?] at io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:412) ~[?:?] at io.netty.handler.codec.ByteToMessageDecoder.channelInactive(ByteToMessageDecoder.java:377) ~[?:?] at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:251) ~[?:?] at io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:412) ~[?:?] at io.netty.handler.codec.ByteToMessageDecoder.channelInactive(ByteToMessageDecoder.java:377) ~[?:?] at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:251) ~[?:?] at io.netty.channel.DefaultChannelPipeline$HeadContext.channelInactive(DefaultChannelPipeline.java:1424) ~[?:?] at io.netty.channel.DefaultChannelPipeline.fireChannelInactive(DefaultChannelPipeline.java:876) ~[?:?] at io.netty.channel.AbstractChannel$AbstractUnsafe$8.run(AbstractChannel.java:684) ~[?:?] at io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:148) ~[?:?] at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:141) ~[?:?] at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:507) ~[?:?] at io.netty.channel.SingleThreadIoEventLoop.run(SingleThreadIoEventLoop.java:182) ~[?:?] at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:1073) ~[?:?] at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[?:?] at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[?:?] at java.lang.Thread.run(Thread.java:1583) [?:?] 
2025-08-16T03:35:39,195 | INFO | epollEventLoopGroup-5-2 | ReconciliationManagerImpl | 302 - org.opendaylight.openflowplugin.applications.reconciliation-framework - 0.20.0 | Stopping reconciliation for node Uri{value=openflow:1} 2025-08-16T03:35:39,197 | INFO | epollEventLoopGroup-5-2 | ReconciliationManagerImpl | 302 - org.opendaylight.openflowplugin.applications.reconciliation-framework - 0.20.0 | Stopping reconciliation for node Uri{value=openflow:1} 2025-08-16T03:35:39,198 | INFO | epollEventLoopGroup-5-2 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Role SLAVE was granted to device openflow:1 2025-08-16T03:35:39,198 | INFO | epollEventLoopGroup-5-2 | GuardedContextImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Stopping RoleContextImpl[RUNNING] service for node openflow:1 2025-08-16T03:35:39,198 | INFO | epollEventLoopGroup-5-2 | GuardedContextImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Stopping StatisticsContextImpl[RUNNING] service for node openflow:1 2025-08-16T03:35:39,198 | INFO | epollEventLoopGroup-5-2 | StatisticsContextImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Stopping running statistics gathering for node openflow:1 2025-08-16T03:35:39,198 | WARN | pool-21-thread-1 | FlowNodeReconciliationImpl | 299 - org.opendaylight.openflowplugin.applications.forwardingrules-manager - 0.20.0 | Fail with read Config/DS for Node DataObjectIdentifier[ @ urn.opendaylight.inventory.rev130819.Nodes ... nodes.Node[NodeKey{id=Uri{value=openflow:1}}] @ urn.opendaylight.flow.inventory.rev130819.FlowCapableNode ] ! java.lang.InterruptedException: null at com.google.common.util.concurrent.AbstractFutureState.blockingGet(AbstractFutureState.java:249) ~[?:?] at com.google.common.util.concurrent.Platform.get(Platform.java:54) ~[?:?] at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:253) ~[?:?] at com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:91) ~[?:?] at org.opendaylight.openflowplugin.applications.frm.impl.FlowNodeReconciliationImpl$ReconciliationTask.call(FlowNodeReconciliationImpl.java:354) ~[?:?] at org.opendaylight.openflowplugin.applications.frm.impl.FlowNodeReconciliationImpl$ReconciliationTask.call(FlowNodeReconciliationImpl.java:336) ~[?:?] at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:128) ~[?:?] at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:74) ~[?:?] at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:80) ~[?:?] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) ~[?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ~[?:?] at java.lang.Thread.run(Thread.java:1583) [?:?] 
2025-08-16T03:35:39,201 | INFO | epollEventLoopGroup-5-2 | GuardedContextImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Stopping RpcContextImpl[RUNNING] service for node openflow:1 2025-08-16T03:35:39,204 | INFO | epollEventLoopGroup-5-2 | GuardedContextImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Stopping DeviceContextImpl[RUNNING] service for node openflow:1 2025-08-16T03:35:39,207 | INFO | ofppool-0 | ContextChainImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Closed clustering services for node openflow:1 2025-08-16T03:35:39,207 | INFO | epollEventLoopGroup-5-2 | ContextChainImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Closed clustering services registration for node openflow:1 2025-08-16T03:35:39,208 | INFO | epollEventLoopGroup-5-2 | GuardedContextImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Terminating DeviceContextImpl[TERMINATED] service for node openflow:1 2025-08-16T03:35:39,208 | INFO | epollEventLoopGroup-5-2 | GuardedContextImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Terminating RpcContextImpl[TERMINATED] service for node openflow:1 2025-08-16T03:35:39,208 | INFO | epollEventLoopGroup-5-2 | GuardedContextImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Terminating StatisticsContextImpl[TERMINATED] service for node openflow:1 2025-08-16T03:35:39,209 | INFO | epollEventLoopGroup-5-2 | StatisticsContextImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Stopping running statistics gathering for node openflow:1 2025-08-16T03:35:39,209 | INFO | epollEventLoopGroup-5-2 | GuardedContextImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Terminating RoleContextImpl[TERMINATED] service for node openflow:1 2025-08-16T03:35:39,334 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-42 | DeviceOwnershipServiceImpl | 298 - org.opendaylight.openflowplugin.applications.device-ownership-service - 0.20.0 | Entity ownership change received for node : openflow:1 : LOCAL_OWNERSHIP_LOST_NO_OWNER [wasOwner=true, isOwner=false, hasOwner=false] 2025-08-16T03:35:39,335 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Entity ownership change received for node : LOCAL_OWNERSHIP_LOST_NO_OWNER [wasOwner=true, isOwner=false, hasOwner=false] 2025-08-16T03:35:39,839 | INFO | node-cleaner-0 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Try to remove device openflow:1 from operational DS 2025-08-16T03:35:40,108 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] 
at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:35:41,128 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:35:41,763 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Start Mininet Connect To Inventory Leader" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Start Mininet Connect To Inventory Leader 2025-08-16T03:35:42,147 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:35:43,174 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:35:44,187 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:35:44,559 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Verify Flows In Switch Connected To Leader" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Verify Flows In Switch Connected To Leader 2025-08-16T03:35:44,815 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-42 | DeviceOwnershipServiceImpl | 298 - org.opendaylight.openflowplugin.applications.device-ownership-service - 0.20.0 | Entity ownership change received for node : openflow:1 : REMOTE_OWNERSHIP_CHANGED [wasOwner=false, isOwner=false, hasOwner=true] 2025-08-16T03:35:45,054 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-42 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Entity ownership change received for node : REMOTE_OWNERSHIP_CHANGED [wasOwner=false, isOwner=false, hasOwner=true] 2025-08-16T03:35:45,208 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:35:46,228 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:35:47,247 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:35:48,268 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:35:49,289 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:35:50,309 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:35:51,329 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:35:52,349 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:35:53,368 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:35:54,386 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:35:55,407 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:35:56,428 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:35:57,448 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:35:58,467 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more
2025-08-16T03:35:59,488 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed
2025-08-16T03:36:00,507 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed
2025-08-16T03:36:01,529 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed
2025-08-16T03:36:02,549 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed
2025-08-16T03:36:03,568 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed
2025-08-16T03:36:04,588 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed
2025-08-16T03:36:05,610 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed
2025-08-16T03:36:06,629 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed
2025-08-16T03:36:07,650 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed
2025-08-16T03:36:08,668 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed
2025-08-16T03:36:08,997 | INFO | opendaylight-cluster-data-notification-dispatcher-51 | ConnectionManagerImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Clearing the device connection timer for the device 1
2025-08-16T03:36:09,687 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed
2025-08-16T03:36:10,708 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed
2025-08-16T03:36:11,729 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed
2025-08-16T03:36:12,747 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed
2025-08-16T03:36:13,767 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed
2025-08-16T03:36:14,787 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:36:15,808 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:36:16,828 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:36:17,850 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:36:18,868 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:36:19,887 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:36:20,909 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:36:21,928 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:36:22,947 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:36:23,968 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:36:24,988 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:36:26,008 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:36:27,028 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:36:28,048 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
2025-08-16T03:36:29,068 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-08-16T03:36:30,087 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-08-16T03:36:31,108 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-08-16T03:36:32,127 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-08-16T03:36:33,147 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-08-16T03:36:34,167 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-08-16T03:36:35,187 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-08-16T03:36:36,207 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-08-16T03:36:37,227 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-08-16T03:36:38,248 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-08-16T03:36:39,268 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-08-16T03:36:40,290 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-08-16T03:36:41,308 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-08-16T03:36:42,328 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-08-16T03:36:43,349 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
	at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:36:44,367 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:36:45,388 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:36:46,407 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:36:47,427 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:36:48,446 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:36:49,467 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:36:50,489 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:36:51,507 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:36:52,527 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:36:53,547 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:36:54,568 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:36:55,588 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:36:56,608 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:36:57,628 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:36:58,648 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:36:59,668 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:37:00,687 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:37:01,707 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:37:02,728 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:37:03,749 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:37:04,768 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:37:05,788 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:37:06,808 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:37:07,828 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:37:08,848 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	... 5 more
2025-08-16T03:37:09,868 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:37:10,887 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:37:11,908 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:37:12,927 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:37:13,947 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:37:14,967 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:37:15,987 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:37:17,007 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:37:18,029 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:37:19,047 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:37:20,067 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:37:21,090 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:37:22,107 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:37:23,128 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:37:24,148 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:37:24,982 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Stop Mininet Connected To Inventory Leader" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Stop Mininet Connected To Inventory Leader 2025-08-16T03:37:25,168 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:37:25,255 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Entity ownership change received for node : REMOTE_OWNERSHIP_LOST_NO_OWNER [wasOwner=false, isOwner=false, hasOwner=false] 2025-08-16T03:37:25,255 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-41 | DeviceOwnershipServiceImpl | 298 - org.opendaylight.openflowplugin.applications.device-ownership-service - 0.20.0 | Entity ownership change received for node : openflow:1 : REMOTE_OWNERSHIP_LOST_NO_OWNER [wasOwner=false, isOwner=false, hasOwner=false] 2025-08-16T03:37:25,761 | INFO | node-cleaner-2 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Try to remove device openflow:1 from operational DS 2025-08-16T03:37:26,188 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] 
at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:37:27,208 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:37:27,587 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Delete All Flows From Follower Node1" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Delete All Flows From Follower Node1 2025-08-16T03:37:28,227 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:37:29,247 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:37:30,267 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:37:31,288 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:37:32,307 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:37:33,326 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:37:34,348 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:37:35,367 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	... 5 more
2025-08-16T03:37:36,390 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
	at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
	at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
	at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	... 5 more
2025-08-16T03:37:37,408 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
	at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
	at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
	at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	... 5 more
2025-08-16T03:37:38,429 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
	at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
	at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
	at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	... 5 more
2025-08-16T03:37:39,448 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
	at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
	at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
	at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	... 5 more
2025-08-16T03:37:40,468 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
	at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
	at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
	at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	... 5 more
2025-08-16T03:37:41,487 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
	at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
	at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
	at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	... 5 more
2025-08-16T03:37:42,508 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
	at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
	at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
	at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	... 5 more
2025-08-16T03:37:43,528 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
	at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
	at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
	at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	... 5 more
2025-08-16T03:37:44,548 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
	at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
	at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
	at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	... 5 more
2025-08-16T03:37:45,570 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
	at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
	at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
	at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	... 5 more
2025-08-16T03:37:46,588 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
	at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
	at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
	at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	... 5 more
2025-08-16T03:37:47,608 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
	at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
	at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
	at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	... 5 more
2025-08-16T03:37:48,630 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
	at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
	at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
	at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	... 5 more
2025-08-16T03:37:49,649 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
	at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
	at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
	at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	... 5 more
2025-08-16T03:37:50,668 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
	at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
	at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
	at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	... 5 more
2025-08-16T03:37:51,687 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
	at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
	at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
	at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	... 5 more
2025-08-16T03:37:52,707 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
	at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
	at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
	at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	...
5 more 2025-08-16T03:37:53,728 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:37:54,747 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:37:55,768 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:37:56,788 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:37:57,807 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:37:58,828 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:37:59,848 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:38:00,869 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:38:01,888 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:38:02,908 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:38:03,928 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:38:04,948 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:38:05,968 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
2025-08-16T03:38:05,968 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:38:06,991 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:38:08,007 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:38:09,030 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:38:10,047 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:38:11,068 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:38:12,088 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:38:13,108 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:38:14,127 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:38:15,149 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:38:16,169 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:38:17,188 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:38:18,208 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:38:19,227 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:38:20,248 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:38:21,268 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:38:22,287 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:38:23,307 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:38:24,327 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:38:25,348 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:38:26,367 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:38:27,387 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:38:28,407 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:38:29,427 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:38:30,448 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:38:31,467 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:38:32,488 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:38:33,507 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ... 5 more
[The following "Failed to resolve shard" warnings repeat the identical TimeoutException / NotLeaderException stack trace shown above at roughly one-second intervals; duplicate stack traces omitted.]
2025-08-16T03:38:34,527 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:38:35,547 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:38:36,567 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:38:37,587 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:38:38,608 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:38:39,628 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:38:40,648 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:38:41,668 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:38:42,689 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:38:43,708 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:38:44,728 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:38:45,747 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:38:46,768 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:38:47,787 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:38:48,808 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:38:49,827 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
    at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
    at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
    at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:38:50,848 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:38:51,868 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:38:52,888 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:38:53,908 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:38:54,928 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:38:55,947 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:38:56,966 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:38:57,987 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:38:59,007 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:39:00,028 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:39:01,048 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:39:02,068 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:39:03,087 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:39:04,107 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:39:05,128 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:39:06,147 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:39:07,168 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:39:08,187 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:39:08,564 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Verify No Flows In Inventory Leader" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Reconcilliation Multi DPN.Verify No Flows In Inventory Leader 2025-08-16T03:39:09,208 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:39:10,228 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:39:11,248 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] 
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:39:12,267 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:39:13,288 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:39:14,308 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:39:15,328 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
    at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
    at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ... 5 more
2025-08-16T03:39:16,348 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:39:17,367 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:39:18,388 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:39:19,407 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:39:20,427 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:39:21,447 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:39:22,467 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:39:23,488 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:39:24,508 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:39:25,528 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:39:26,548 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:39:27,569 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:39:28,588 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:39:29,607 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:39:30,629 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:39:31,649 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
    at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
    at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
    at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
    at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
    at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
    at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
    at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:39:32,668 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:39:33,687 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:39:34,708 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:39:35,727 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:39:36,748 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:39:37,769 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:39:38,787 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:39:39,807 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:39:40,828 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:39:41,850 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:39:42,868 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:39:43,887 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ... 5 more
2025-08-16T03:39:44,909 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:39:45,927 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:39:46,947 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:39:47,968 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:39:48,988 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:39:50,009 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:39:51,027 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:39:52,047 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:39:53,068 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:39:54,087 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:39:55,109 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:39:56,127 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:39:57,149 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:39:58,167 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:39:59,188 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:40:00,208 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:40:01,227 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:40:02,247 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:40:03,269 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:40:04,288 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:40:05,307 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:40:06,327 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:40:07,348 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:40:08,369 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:40:09,388 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:40:10,407 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:40:11,427 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 
2025-08-16T03:40:27,759 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:40:28,778 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:40:29,799 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:40:30,817 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:40:31,838 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:40:32,857 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:40:33,880 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:40:34,898 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:40:35,918 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:40:36,937 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:40:37,958 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:40:38,978 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:40:39,997 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:40:41,018 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:40:42,037 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:40:43,057 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:40:44,077 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:40:45,097 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:40:46,117 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:40:47,137 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:40:48,158 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:40:49,179 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:40:50,198 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:40:51,217 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:40:52,051 | INFO | sshd-SshServer[3030c5c9](port=8101)-nio2-thread-1 | ServerSessionImpl | 125 - org.apache.sshd.osgi - 2.14.0 | Session karaf@/10.30.171.78:55032 authenticated 2025-08-16T03:40:52,237 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:40:52,614 | INFO | pipe-log:log "ROBOT MESSAGE: Starting suite /w/workspace/openflowplugin-csit-3node-clustering-bulkomatic-only-titanium/test/csit/suites/openflowplugin/Clustering_Bulkomatic/020__Cluster_HA_Data_Recovery_BulkFlow_2Node_Cluster.robot" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting suite /w/workspace/openflowplugin-csit-3node-clustering-bulkomatic-only-titanium/test/csit/suites/openflowplugin/Clustering_Bulkomatic/020__Cluster_HA_Data_Recovery_BulkFlow_2Node_Cluster.robot 2025-08-16T03:40:53,017 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Check Shards Status and Initialize Variables" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Check Shards Status and Initialize Variables 2025-08-16T03:40:53,257 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ... 5 more
2025-08-16T03:40:54,277 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:40:55,297 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:40:56,318 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:40:57,338 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:40:58,357 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:40:59,378 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:41:00,397 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:41:00,713 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Get Inventory Follower Before Leader Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Get Inventory Follower Before Leader Restart
2025-08-16T03:41:01,417 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:41:02,437 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:41:03,458 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:41:04,478 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:41:05,498 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:41:06,517 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:41:07,537 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:41:08,556 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:41:09,578 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
    at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
    at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
    at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
    at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
    at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
    at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
    at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:41:10,596 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:41:11,617 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:41:12,564 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Shutdown Leader From Cluster Node" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Shutdown Leader From Cluster Node 2025-08-16T03:41:12,637 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:41:12,959 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Check Shards Status After Leader Shutdown" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Check Shards Status After Leader Shutdown 2025-08-16T03:41:13,656 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:41:14,679 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:41:15,699 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] 
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:41:16,717 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:41:17,738 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:41:18,758 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:41:19,778 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:41:20,797 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
2025-08-16T03:41:21,818 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:41:22,837 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:41:23,859 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:41:24,877 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:41:25,898 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:41:26,918 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:41:27,937 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:41:28,957 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:41:29,977 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:41:30,998 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:41:32,017 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:41:33,037 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:41:34,057 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:41:35,077 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:41:36,098 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:41:37,252 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:41:39,292 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
    at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
    at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
    at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:41:40,306 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:41:41,328 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:41:42,347 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:41:43,367 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:41:44,386 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:41:45,407 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:41:46,426 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:41:47,447 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:41:48,467 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:41:49,487 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:41:50,507 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:41:51,527 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:41:52,547 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:41:53,570 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:41:54,587 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:41:55,607 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:41:56,627 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:41:57,647 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:41:58,666 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:41:59,687 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:42:00,707 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:42:01,727 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:42:02,747 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:42:03,767 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	... 5 more
2025-08-16T03:42:04,787 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
	at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
	at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
	at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	... 5 more
2025-08-16T03:42:05,808 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:42:06,828 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:42:07,847 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:42:08,867 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:42:09,888 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:42:10,908 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:42:11,927 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:42:12,947 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:42:13,967 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:42:14,986 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:42:16,009 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:42:17,027 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:42:18,046 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:42:19,068 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:42:20,087 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:42:21,107 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
	at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
	at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
	at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:42:22,127 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:42:23,148 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:42:24,167 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:42:25,187 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:42:26,207 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:42:27,227 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:42:28,247 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:42:29,267 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:42:30,286 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:42:31,307 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:42:32,326 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:42:33,346 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:42:34,367 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:42:35,387 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:42:36,407 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:42:37,427 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:42:38,448 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:42:39,468 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:42:40,488 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:42:41,507 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:42:42,526 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:42:43,547 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:42:44,566 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:42:45,586 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:42:46,607 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:42:47,627 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
... 5 more
2025-08-16T03:42:48,646 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:42:49,666 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:42:50,686 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:42:51,707 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:42:52,726 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:42:53,747 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:42:53,894 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Check Shard Status For Leader After PreLeader Shutdown" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Check Shard Status For Leader After PreLeader Shutdown
2025-08-16T03:42:54,276 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Start Mininet Connect To Follower Node1" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Start Mininet Connect To Follower Node1
2025-08-16T03:42:54,637 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Add Bulk Flow From Follower" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Add Bulk Flow From Follower
2025-08-16T03:42:54,767 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:42:55,061 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Get Bulk Flows And Verify In Leader" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Get Bulk Flows And Verify In Leader
2025-08-16T03:42:55,497 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Verify Flows In Switch Before Cluster Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Verify Flows In Switch Before Cluster Restart
2025-08-16T03:42:55,787 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:42:55,883 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Restart Pre Leader From Cluster Node" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Restart Pre Leader From Cluster Node
2025-08-16T03:42:56,262 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Check Shards Status After Leader Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Check Shards Status After Leader Restart
2025-08-16T03:42:56,808 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:42:57,826 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:42:58,847 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:42:59,867 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:43:00,887 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:43:01,907 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:43:02,927 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:43:03,946 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:43:04,968 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:43:05,987 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:43:07,009 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:43:08,027 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:43:09,047 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:43:10,068 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:43:11,087 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:43:12,107 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:43:13,128 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:43:14,147 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more
2025-08-16T03:43:30,466 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:43:31,486 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:43:32,507 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:43:33,527 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:43:34,547 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:43:35,566 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:43:36,587 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:43:37,609 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:43:38,626 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:43:39,647 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:43:40,668 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:43:41,687 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:43:42,707 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:43:43,727 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
2025-08-16T03:43:43,727 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:43:44,747 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:43:45,767 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:43:46,787 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:43:47,807 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:43:48,827 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:43:49,846 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:43:50,867 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:43:51,887 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:43:52,907 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:43:53,927 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:43:54,946 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:43:55,967 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:43:56,988 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:43:58,007 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:43:59,027 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:44:00,046 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:44:01,066 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:44:02,087 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:44:03,107 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:44:04,127 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:44:05,147 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:44:06,167 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:44:07,187 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:44:08,207 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:44:09,227 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:44:10,247 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:44:11,267 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
2025-08-16T03:44:12,287 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:44:13,306 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:44:14,327 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:44:15,347 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:44:16,366 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:44:17,387 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:44:18,407 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:44:19,427 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:44:20,447 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:44:21,467 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:44:22,487 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:44:23,506 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:44:24,527 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:44:25,547 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:44:26,566 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:44:27,587 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:44:28,607 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:44:29,626 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:44:30,646 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:44:31,667 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:44:32,687 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:44:33,707 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:44:34,727 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:44:35,747 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:44:36,767 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:44:37,787 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	... 5 more
2025-08-16T03:44:38,806 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
	at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
	at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
	at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	... 5 more
2025-08-16T03:44:39,827 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
	... (stack trace identical to the 03:44:38,806 warning above)
2025-08-16T03:44:40,847 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
	... (stack trace identical to the 03:44:38,806 warning above)
2025-08-16T03:44:41,867 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
	... (stack trace identical to the 03:44:38,806 warning above)
2025-08-16T03:44:42,887 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
	... (stack trace identical to the 03:44:38,806 warning above)
2025-08-16T03:44:43,906 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
	... (stack trace identical to the 03:44:38,806 warning above)
2025-08-16T03:44:44,927 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
	... (stack trace identical to the 03:44:38,806 warning above)
2025-08-16T03:44:45,947 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
	... (stack trace identical to the 03:44:38,806 warning above)
2025-08-16T03:44:46,967 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
	... (stack trace identical to the 03:44:38,806 warning above)
2025-08-16T03:44:48,320 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
	... (stack trace identical to the 03:44:38,806 warning above)
2025-08-16T03:44:49,337 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
	... (stack trace identical to the 03:44:38,806 warning above)
2025-08-16T03:44:50,358 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
	... (stack trace identical to the 03:44:38,806 warning above)
2025-08-16T03:44:51,377 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
	... (stack trace identical to the 03:44:38,806 warning above)
2025-08-16T03:44:52,397 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
	... (stack trace identical to the 03:44:38,806 warning above)
2025-08-16T03:44:53,417 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
	... (stack trace identical to the 03:44:38,806 warning above)
2025-08-16T03:44:54,436 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
	... (stack trace identical to the 03:44:38,806 warning above)
2025-08-16T03:44:55,458 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
	at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
	at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
	at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:44:56,478 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:44:57,497 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:44:58,517 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:44:59,537 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:45:00,557 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:45:01,577 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:45:02,597 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:45:03,617 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:45:04,637 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:45:05,658 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:45:06,677 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:45:07,699 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:45:08,717 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:45:09,737 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:45:10,759 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:45:11,777 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:45:12,797 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:45:13,817 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:45:14,840 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:45:15,856 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:45:16,877 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:45:17,897 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:45:18,917 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:45:19,937 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:45:20,956 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:45:21,977 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	... 5 more
2025-08-16T03:45:22,998 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:45:24,017 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:45:25,037 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:45:26,058 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:45:27,077 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:45:28,097 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:45:29,117 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:45:30,138 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:45:31,157 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:45:32,177 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:45:33,197 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:45:34,218 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:45:35,238 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:45:36,257 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:45:37,278 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:45:38,297 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
	at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:45:39,318 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:45:40,338 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:45:41,357 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:45:42,377 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:45:43,398 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:45:44,417 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:45:45,438 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:45:46,457 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:45:47,477 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:45:48,498 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:45:49,516 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:45:50,537 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
5 more 2025-08-16T03:46:05,837 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:46:06,857 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:46:07,877 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:46:08,897 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:46:09,917 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:46:10,937 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:46:11,957 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:46:12,977 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:46:13,996 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:46:15,017 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:46:16,037 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:46:17,058 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:46:18,077 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:46:19,097 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:46:20,117 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:46:21,137 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:46:22,157 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:46:23,177 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:46:24,196 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:46:25,217 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:46:26,237 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:46:27,257 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:46:28,278 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:46:29,296 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:46:30,317 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:46:31,337 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:46:32,357 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	... 5 more
2025-08-16T03:46:33,378 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:46:34,397 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:46:35,417 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:46:36,436 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:46:37,457 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:46:38,478 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:46:39,497 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:46:40,517 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:46:41,537 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:46:42,559 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:46:43,577 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:46:44,597 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:46:45,617 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:46:46,637 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:46:47,657 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:46:48,677 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:46:49,696 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:46:50,717 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:46:51,737 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:46:52,757 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:46:53,777 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:46:54,796 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:46:55,817 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:46:56,837 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:46:57,857 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:46:58,876 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:46:59,901 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:47:00,917 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
[The preceding "Failed to resolve shard" WARN from AbstractShardBackendResolver (java.util.concurrent.TimeoutException: Connection attempt failed, caused by the NotLeaderException above for member-2-shard-inventory-config) recurs roughly once per second with an identical stack trace at 2025-08-16T03:47:00,917, 03:47:01,937, 03:47:02,956, 03:47:03,977, 03:47:04,998, 03:47:06,017, 03:47:07,037, 03:47:08,057, 03:47:09,077, 03:47:10,097, 03:47:11,117, 03:47:12,137, 03:47:13,157, 03:47:14,177 and 03:47:15,199.]
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:47:16,217 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:47:17,237 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:47:18,256 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:47:19,277 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:47:20,297 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:47:21,317 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:47:22,336 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:47:23,357 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:47:24,377 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:47:25,397 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:47:26,416 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:47:27,436 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:47:28,457 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:47:29,476 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:47:44,778 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:47:45,810 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:47:46,830 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:47:47,848 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:47:48,867 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:47:49,887 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:47:50,908 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:47:51,928 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:47:52,947 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:47:53,967 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:47:54,988 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:47:56,008 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:47:57,026 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ... 5 more
2025-08-16T03:47:58,046 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-08-16T03:47:59,070 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-08-16T03:48:00,087 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-08-16T03:48:01,107 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-08-16T03:48:02,128 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-08-16T03:48:03,147 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-08-16T03:48:04,167 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-08-16T03:48:05,187 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-08-16T03:48:06,207 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-08-16T03:48:07,227 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-08-16T03:48:08,247 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-08-16T03:48:09,266 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-08-16T03:48:10,287 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-08-16T03:48:11,307 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-08-16T03:48:12,329 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
2025-08-16T03:48:13,347 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:48:14,367 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:48:15,387 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:48:16,407 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:48:17,427 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:48:18,447 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:48:19,468 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:48:20,487 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:48:21,506 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:48:22,528 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:48:23,548 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:48:24,567 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ... 5 more
2025-08-16T03:48:25,589 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:48:26,607 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:48:27,627 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:48:28,647 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:48:29,667 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:48:30,687 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:48:31,708 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:48:32,727 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:48:33,747 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:48:34,766 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:48:35,788 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:48:36,807 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:48:37,826 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:48:38,847 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:48:39,872 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:48:40,886 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:48:41,907 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:48:42,927 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:48:43,947 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:48:44,968 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:48:45,986 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:48:47,007 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:48:48,027 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:48:49,047 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:48:50,067 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:48:51,086 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:48:52,107 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:48:53,127 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	... 5 more
2025-08-16T03:48:54,148 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:48:54,528 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Verify Data Recovery After Leader Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Verify Data Recovery After Leader Restart
2025-08-16T03:48:55,166 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:48:56,187 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:48:57,207 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:48:58,227 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:48:59,246 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:49:00,267 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:49:01,287 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:49:02,307 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:49:03,327 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:49:04,348 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:49:05,367 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:49:06,387 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:49:07,407 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:49:08,427 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:49:09,447 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:49:10,468 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:49:11,487 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:49:12,507 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:49:13,526 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:49:14,546 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:49:15,566 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:49:16,586 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:49:17,606 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:49:18,627 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:49:19,647 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:49:20,667 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ... 5 more
2025-08-16T03:49:36,988 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:49:38,006 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:49:39,027 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:49:40,047 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:49:41,067 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:49:42,086 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:49:43,106 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:49:44,127 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:49:45,147 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:49:46,167 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:49:47,187 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:49:48,207 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:49:49,226 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:49:50,247 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
2025-08-16T03:49:50,247 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:49:51,266 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:49:52,287 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:49:53,307 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:49:54,327 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:49:55,347 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:49:56,367 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:49:57,386 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:49:58,409 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:49:59,426 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:50:00,447 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:50:01,467 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:50:02,487 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:50:03,507 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:50:04,526 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:50:05,546 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
	at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
	at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
	at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:50:06,567 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:50:07,586 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:50:08,607 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:50:09,627 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:50:10,646 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:50:11,667 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:50:12,687 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:50:13,708 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:50:14,727 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:50:15,747 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:50:16,768 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:50:17,787 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:50:18,807 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more
2025-08-16T03:50:19,827 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:50:20,847 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:50:21,867 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:50:22,887 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:50:23,907 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:50:24,927 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:50:25,951 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:50:26,967 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:50:27,986 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:50:29,007 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:50:30,028 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:50:31,047 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:50:32,067 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:50:33,088 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:50:34,107 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:50:35,127 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
2025-08-16T03:50:35,894 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Verify Flows In Switch After Leader Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Verify Flows In Switch After Leader Restart
2025-08-16T03:50:36,147 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:50:36,309 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Stop Mininet Connected To Follower Node1" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Stop Mininet Connected To Follower Node1
2025-08-16T03:50:36,671 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Delete All Flows From Follower Node1" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Delete All Flows From Follower Node1
2025-08-16T03:50:37,030 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Verify No Flows In Leader Node" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Verify No Flows In Leader Node
2025-08-16T03:50:37,169 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:50:37,451 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Get Inventory Follower And Leader Before Cluster Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Get Inventory Follower And Leader Before Cluster Restart
2025-08-16T03:50:38,188 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:50:39,209 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:50:40,227 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:50:41,246 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:50:42,266 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:50:43,288 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:50:44,308 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:50:45,327 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:50:46,346 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:50:47,367 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:50:48,387 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:50:49,263 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Shutdown Follower From Cluster Node" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Shutdown Follower From Cluster Node 2025-08-16T03:50:49,408 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:50:49,662 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Check Shards Status After Follower Shutdown" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Check Shards Status After Follower Shutdown 2025-08-16T03:50:50,427 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] 
at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:50:51,447 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:50:52,466 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:50:53,487 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:50:54,507 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:50:55,527 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:50:56,547 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:50:57,568 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:50:58,587 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:50:59,607 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
	at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:51:00,628 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:51:01,648 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:51:02,667 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:51:03,687 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:51:04,707 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:51:05,727 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:51:06,747 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:51:07,767 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:51:08,787 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:51:09,807 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:51:10,826 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:51:11,847 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:51:12,867 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:51:13,887 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:51:14,908 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
	at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
	at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
	at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:51:15,927 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:51:16,947 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:51:17,967 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:51:18,986 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:51:20,007 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:51:21,027 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:51:22,047 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:51:23,067 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:51:24,089 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:51:25,107 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:51:26,127 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:51:27,148 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
... 5 more
2025-08-16T03:51:28,167 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:51:29,188 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:51:30,207 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:51:31,227 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:51:32,247 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:51:33,267 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:51:34,288 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:51:35,307 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:51:36,327 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:51:37,350 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:51:38,367 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:51:39,387 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:51:40,406 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:51:41,426 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:51:42,448 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:51:43,468 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:51:44,487 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:51:45,508 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:51:46,527 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:51:47,547 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:51:48,568 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:51:49,587 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:51:50,608 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:51:51,629 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:51:52,646 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:51:53,666 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:51:54,687 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ... 5 more
2025-08-16T03:51:55,707 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:51:56,727 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:51:57,747 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:51:58,767 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:51:59,788 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:52:00,807 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:52:01,827 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:52:02,847 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:52:03,867 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:52:04,887 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:52:05,907 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:52:06,927 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:52:07,948 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:52:08,968 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:52:09,997 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:52:11,017 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
    at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
    at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
    at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
    at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
    at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
    at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
    at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:52:12,037 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:52:13,058 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:52:14,077 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:52:15,096 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:52:16,117 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:52:17,138 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:52:18,157 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:52:19,176 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:52:20,197 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:52:21,216 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:52:22,237 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:52:23,257 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:52:24,277 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:52:25,297 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:52:26,317 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:52:27,337 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:52:28,358 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:52:29,377 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:52:30,412 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:52:30,539 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Start Mininet Connect To Follower Node" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Start Mininet Connect To Follower Node 2025-08-16T03:52:30,906 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Add Bulk Flow From Follower Node1" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Add Bulk Flow From Follower Node1 2025-08-16T03:52:31,350 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Get Bulk Flows And Verify In Leader Before Follower Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Get Bulk Flows And Verify In Leader Before Follower Restart 2025-08-16T03:52:31,428 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:52:31,751 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Verify Flows In Switch Before Follower Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Verify Flows In Switch Before Follower Restart 2025-08-16T03:52:32,109 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Restart Follower From Cluster Node" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Restart Follower From Cluster Node 2025-08-16T03:52:32,448 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:52:32,491 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Check Shards Status After Follower Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Check Shards Status After Follower Restart 2025-08-16T03:52:33,466 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] 
at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:52:34,487 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:52:35,507 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:52:36,528 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:52:37,546 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:52:38,567 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:52:39,586 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:52:40,607 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:52:41,627 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:52:42,646 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:52:43,667 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:52:44,687 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:52:45,707 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:52:46,727 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:52:47,748 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:52:48,767 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:52:49,787 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:52:50,807 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:52:51,826 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:52:52,847 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:52:53,866 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:52:54,888 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:52:55,907 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:52:56,927 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:52:57,947 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:52:58,967 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:52:59,987 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:53:01,008 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:53:02,026 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:53:03,046 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:53:04,067 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
	[stack frames identical to the NotLeaderException trace shown in full in the 03:53:05,087 entry below]
	... 5 more
2025-08-16T03:53:05,087 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
	at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
	at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
	at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	... 5 more
2025-08-16T03:53:06,108 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
	[java.util.concurrent.TimeoutException: Connection attempt failed; stack trace identical to the 03:53:05,087 entry above]
2025-08-16T03:53:07,127 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
	[java.util.concurrent.TimeoutException: Connection attempt failed; stack trace identical to the 03:53:05,087 entry above]
2025-08-16T03:53:08,147 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
	[java.util.concurrent.TimeoutException: Connection attempt failed; stack trace identical to the 03:53:05,087 entry above]
2025-08-16T03:53:09,167 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
	[java.util.concurrent.TimeoutException: Connection attempt failed; stack trace identical to the 03:53:05,087 entry above]
2025-08-16T03:53:10,187 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
	[java.util.concurrent.TimeoutException: Connection attempt failed; stack trace identical to the 03:53:05,087 entry above]
2025-08-16T03:53:11,207 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
	[java.util.concurrent.TimeoutException: Connection attempt failed; stack trace identical to the 03:53:05,087 entry above]
2025-08-16T03:53:12,227 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
	[java.util.concurrent.TimeoutException: Connection attempt failed; stack trace identical to the 03:53:05,087 entry above]
2025-08-16T03:53:13,247 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
	[java.util.concurrent.TimeoutException: Connection attempt failed; stack trace identical to the 03:53:05,087 entry above]
2025-08-16T03:53:14,266 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
	[java.util.concurrent.TimeoutException: Connection attempt failed; stack trace identical to the 03:53:05,087 entry above]
2025-08-16T03:53:15,287 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
	[java.util.concurrent.TimeoutException: Connection attempt failed; stack trace identical to the 03:53:05,087 entry above]
2025-08-16T03:53:16,307 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
	[java.util.concurrent.TimeoutException: Connection attempt failed; stack trace identical to the 03:53:05,087 entry above]
2025-08-16T03:53:17,327 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
	[java.util.concurrent.TimeoutException: Connection attempt failed; stack trace identical to the 03:53:05,087 entry above]
2025-08-16T03:53:18,347 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
	[java.util.concurrent.TimeoutException: Connection attempt failed; stack trace identical to the 03:53:05,087 entry above]
2025-08-16T03:53:19,367 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
	[java.util.concurrent.TimeoutException: Connection attempt failed; stack trace identical to the 03:53:05,087 entry above]
2025-08-16T03:53:20,387 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
	[java.util.concurrent.TimeoutException: Connection attempt failed; stack trace identical to the 03:53:05,087 entry above]
2025-08-16T03:53:21,407 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
	[java.util.concurrent.TimeoutException: Connection attempt failed; stack trace identical to the 03:53:05,087 entry above, truncated at this point in the captured log]
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:53:22,427 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:53:23,448 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:53:24,466 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:53:25,486 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:53:26,507 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:53:27,527 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:53:28,547 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:53:29,566 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:53:30,587 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:53:31,607 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:53:32,627 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	... 5 more
2025-08-16T03:53:33,647 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:53:34,667 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:53:35,688 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:53:36,708 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:53:37,727 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:53:38,747 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:53:39,767 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:53:40,787 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:53:41,807 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:53:42,827 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:53:43,847 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:53:44,867 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:53:45,888 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:53:46,907 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:53:47,927 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:53:48,948 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
	...
5 more 2025-08-16T03:53:49,967 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:53:50,987 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:53:52,007 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:53:53,028 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:53:54,047 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:53:55,067 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:53:56,092 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:53:57,107 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:53:58,126 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:53:59,146 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:54:00,167 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:54:01,188 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:54:02,207 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:54:03,226 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:54:04,247 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:54:05,267 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:54:06,287 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:54:07,307 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:54:08,327 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:54:09,347 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:54:10,366 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:54:11,387 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:54:12,407 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:54:13,427 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:54:14,447 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:54:15,467 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
	... 5 more
2025-08-16T03:54:16,486 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:54:17,507 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:54:18,527 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:54:19,548 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:54:20,567 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:54:21,587 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:54:22,607 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:54:23,627 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:54:24,648 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:54:25,666 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:54:26,687 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:54:27,706 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:54:28,727 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:54:29,747 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:54:30,767 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:54:31,787 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
	at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
	at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
	at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:54:32,806 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:54:33,827 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:54:34,847 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:54:35,866 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:54:36,890 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:54:37,908 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:54:38,927 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:54:39,948 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:54:40,966 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:54:41,988 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:54:43,007 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:54:44,027 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:54:45,047 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:54:46,067 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:54:47,087 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:54:48,107 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:54:49,127 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:54:50,148 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:54:51,167 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:54:52,186 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:54:53,206 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:54:54,227 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:54:55,248 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:54:56,268 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	... 5 more
2025-08-16T03:54:57,287 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
	at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
	at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
	at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	... 5 more
2025-08-16T03:54:58,307 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:54:59,327 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:55:00,347 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:55:01,366 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:55:02,387 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:55:03,407 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:55:04,427 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:55:05,447 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:55:06,466 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:55:07,487 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:55:08,509 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:55:09,527 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:55:10,548 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:55:11,567 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:55:12,587 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:55:13,608 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
	at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
	at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
	at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:55:14,627 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:55:15,647 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:55:16,667 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:55:17,688 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:55:19,125 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:55:20,412 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:55:21,428 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:55:22,447 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:55:23,467 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:55:24,487 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:55:25,507 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:55:26,527 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
... 5 more
2025-08-16T03:55:27,547 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:55:28,567 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:55:29,587 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:55:30,608 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:55:31,626 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:55:32,647 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:55:33,667 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:55:34,687 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:55:35,707 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:55:36,727 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:55:37,746 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:55:38,767 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:55:39,787 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:55:40,807 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:55:41,826 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:55:42,846 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:55:43,869 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:55:44,887 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:55:45,907 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:55:46,926 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:55:47,947 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:55:48,968 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:55:49,987 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:55:51,007 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:55:52,027 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:55:53,049 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:55:54,067 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:55:55,086 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	... 5 more
2025-08-16T03:55:56,107 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:55:57,130 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:55:58,147 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:55:59,168 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:56:00,187 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:56:01,208 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:56:02,227 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:56:03,247 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:56:04,267 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:56:05,287 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:56:06,307 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:56:07,329 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:56:08,347 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:56:09,366 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:56:10,386 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:56:11,408 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:56:12,427 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:56:13,447 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:56:14,467 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:56:15,490 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:56:16,508 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:56:17,527 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:56:18,547 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:56:19,568 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:56:20,587 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:56:21,607 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:56:22,627 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:56:23,647 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:56:24,667 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:56:25,687 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:56:26,707 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:56:27,728 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:56:28,747 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:56:29,766 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:56:30,787 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:56:31,806 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:56:32,827 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:56:33,846 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:56:34,869 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:56:35,887 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:56:36,907 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:56:37,927 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:56:38,947 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:56:39,966 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:56:40,986 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:56:42,007 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:56:43,027 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:56:44,047 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:56:45,066 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:56:46,087 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:56:47,106 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:56:48,127 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:56:49,146 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:56:50,167 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:56:51,187 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:56:52,207 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:56:53,227 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:56:54,247 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:56:55,268 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:56:56,287 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:56:57,307 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:56:58,326 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:56:59,347 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:57:00,366 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:57:01,387 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:57:02,407 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:57:03,427 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:57:04,446 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:57:05,466 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:57:06,488 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:57:07,506 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:57:08,526 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:57:09,547 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:57:10,567 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:57:11,587 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:57:12,608 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:57:13,627 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:57:14,648 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:57:15,667 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:57:16,687 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:57:17,707 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:57:18,727 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:57:19,747 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:57:20,767 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:57:21,787 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:57:22,807 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:57:23,828 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:57:24,846 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:57:25,867 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:57:26,889 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:57:27,907 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:57:28,928 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:57:29,947 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:57:30,967 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:57:31,987 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:57:33,007 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
    ... 5 more
2025-08-16T03:57:34,027 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:57:35,046 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:57:36,067 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:57:37,087 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:57:38,107 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:57:39,127 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:57:40,148 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:57:41,168 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:57:42,188 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:57:43,208 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:57:44,228 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:57:45,246 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:57:46,267 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:57:47,288 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:57:48,307 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
5 more 2025-08-16T03:57:49,327 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:57:50,347 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:57:51,367 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:57:52,387 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:57:53,406 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:57:54,431 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:57:55,447 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:57:56,467 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:57:57,486 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:57:58,507 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:57:59,526 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:58:00,546 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:58:01,567 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:58:02,588 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:58:03,607 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:58:04,627 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:58:05,647 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:58:06,668 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:58:07,688 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:58:08,708 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:58:09,728 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:58:10,747 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:58:11,768 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:58:12,789 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:58:13,808 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:58:14,827 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:58:15,847 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:58:16,867 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:58:17,888 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:58:18,907 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:58:19,926 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:58:20,946 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:58:21,968 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:58:22,987 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:58:24,006 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:58:25,026 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:58:26,047 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:58:27,544 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:58:29,107 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:58:30,117 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:58:30,825 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Verify Data Recovery After Follower Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Verify Data Recovery After Follower Restart 2025-08-16T03:58:32,026 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:58:33,234 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:58:34,391 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:58:35,407 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:58:36,427 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:58:37,446 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:58:38,467 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:58:39,487 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:58:40,507 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:58:41,527 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:58:42,548 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:58:43,567 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:58:44,587 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:58:45,607 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:58:46,627 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:58:47,647 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:58:48,667 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:58:49,689 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:58:50,707 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:58:51,727 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:58:52,748 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:58:53,768 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:58:54,787 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:58:55,807 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:58:56,827 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:58:57,847 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:58:58,867 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:58:59,887 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:59:00,907 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:59:01,926 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:59:02,947 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:59:03,966 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:59:04,988 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:59:06,007 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:59:07,027 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:59:08,047 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:59:09,066 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:59:10,086 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:59:11,106 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:59:12,128 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:59:13,147 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:59:14,166 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	... 5 more
2025-08-16T03:59:15,187 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:59:16,207 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:59:17,227 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:59:18,247 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:59:19,268 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:59:20,287 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:59:21,306 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:59:22,327 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:59:23,346 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:59:24,367 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:59:25,387 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:59:26,407 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:59:27,428 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:59:28,446 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:59:29,467 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
2025-08-16T03:59:30,486 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
	at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
	at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
	at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:59:31,506 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:59:32,527 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:59:33,547 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:59:34,567 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:59:35,587 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:59:36,607 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:59:37,628 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:59:38,647 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:59:39,667 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:59:40,686 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:59:41,707 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:59:42,727 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:59:43,747 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:59:44,767 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T03:59:45,788 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:59:46,808 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:59:47,828 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:59:48,847 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:59:49,867 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:59:50,887 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:59:51,907 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:59:52,928 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:59:53,949 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:59:54,967 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T03:59:55,986 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
	... 5 more
2025-08-16T03:59:57,007 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:59:58,026 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T03:59:59,047 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T04:00:00,067 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T04:00:01,087 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T04:00:02,106 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T04:00:03,127 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T04:00:04,147 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T04:00:05,166 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T04:00:06,187 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T04:00:07,207 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T04:00:08,226 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T04:00:09,247 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T04:00:10,266 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T04:00:11,312 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
2025-08-16T04:00:12,327 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
	at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
	at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
	at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
	at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T04:00:13,348 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T04:00:13,407 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Verify Flows In Switch After Follower Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Verify Flows In Switch After Follower Restart 2025-08-16T04:00:13,767 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Stop Mininet Connected To Follower Node" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Stop Mininet Connected To Follower Node 2025-08-16T04:00:14,131 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Delete All Flows From Follower Node" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Delete All Flows From Follower Node 2025-08-16T04:00:14,368 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] 
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T04:00:14,525 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Verify No Flows In Leader Node After Follower Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow 2Node Cluster.Verify No Flows In Leader Node After Follower Restart 2025-08-16T04:00:15,387 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] 
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T04:00:16,407 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T04:00:17,249 | INFO | sshd-SshServer[3030c5c9](port=8101)-nio2-thread-2 | ServerSessionImpl | 125 - org.apache.sshd.osgi - 2.14.0 | Session karaf@/10.30.171.78:45738 authenticated 2025-08-16T04:00:17,427 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] 
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T04:00:17,727 | INFO | pipe-log:log "ROBOT MESSAGE: Starting suite /w/workspace/openflowplugin-csit-3node-clustering-bulkomatic-only-titanium/test/csit/suites/openflowplugin/Clustering_Bulkomatic/030__Cluster_HA_Data_Recovery_BulkFlow_Single_Switch.robot" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting suite /w/workspace/openflowplugin-csit-3node-clustering-bulkomatic-only-titanium/test/csit/suites/openflowplugin/Clustering_Bulkomatic/030__Cluster_HA_Data_Recovery_BulkFlow_Single_Switch.robot 2025-08-16T04:00:18,086 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Check Shards Status And Initialize Variables" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Check Shards Status And Initialize Variables 2025-08-16T04:00:18,447 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T04:00:19,467 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T04:00:20,487 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] 
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 
5 more 2025-08-16T04:00:21,507 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T04:00:22,527 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] 
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T04:00:23,547 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T04:00:24,566 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] 
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T04:00:25,590 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T04:00:25,641 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Get Inventory Follower Before Cluster Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Get Inventory Follower Before Cluster Restart 2025-08-16T04:00:26,608 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T04:00:27,627 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T04:00:28,647 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] 
at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T04:00:29,667 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] 
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T04:00:30,687 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] 
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T04:00:31,707 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] 
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T04:00:32,726 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T04:00:33,973 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] 
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T04:00:34,987 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] 
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T04:00:36,007 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?] at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?] ... 5 more 2025-08-16T04:00:37,026 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Connection attempt failed at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?] at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?] at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?] 
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
... 5 more
2025-08-16T04:00:37,413 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Start Mininet Connect To Follower Node1" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Start Mininet Connect To Follower Node1
2025-08-16T04:00:37,775 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Add Bulk Flow From Follower" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Add Bulk Flow From Follower
2025-08-16T04:00:38,046 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
... 5 more
2025-08-16T04:00:38,133 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Get Bulk Flows and Verify In Cluster" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Get Bulk Flows and Verify In Cluster
2025-08-16T04:00:38,523 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Verify Flows In Switch Before Cluster Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Verify Flows In Switch Before Cluster Restart
2025-08-16T04:00:38,846 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Kill All Cluster Nodes" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Kill All Cluster Nodes
2025-08-16T04:00:39,067 | WARN | ForkJoinPool.commonPool-worker-2 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Connection attempt failed
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[?:?]
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.onConnectResponse(AbstractShardBackendResolver.java:185) ~[?:?]
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:178) ~[?:?]
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) [?:?]
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) [?:?]
at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483) [?:?]
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.access.commands.NotLeaderException: Actor Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-2-shard-inventory-config#301213999] is not the current leader
at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:518) ~[?:?]
at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:333) ~[?:?]
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommandImpl(RaftActor.java:344) ~[?:?]
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:291) ~[?:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[?:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[?:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[?:?]
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[?:?]
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[?:?]
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[?:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[?:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[?:?]
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[?:?]
at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[?:?]
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[?:?]
at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[?:?]
at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[?:?]
at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[?:?]
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[?:?]
at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[?:?]
at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[?:?]
... 5 more
Aug 16, 2025 4:01:16 AM org.apache.karaf.main.lock.SimpleFileLock lock INFO: Trying to lock /tmp/karaf-0.23.0/lock
Aug 16, 2025 4:01:16 AM org.apache.karaf.main.lock.SimpleFileLock lock INFO: Lock acquired
Aug 16, 2025 4:01:16 AM org.apache.karaf.main.Main$KarafLockCallback lockAcquired INFO: Lock acquired. Setting startlevel to 100
2025-08-16T04:01:17,294 | INFO | CM Configuration Updater (ManagedService Update: pid=[org.ops4j.pax.logging]) | EventAdminConfigurationNotifier | 4 - org.ops4j.pax.logging.pax-logging-log4j2 - 2.2.8 | Logging configuration changed. (Event Admin service unavailable - no notification sent).
2025-08-16T04:01:17,399 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.apache.aries.blueprint.cm/1.3.2 has been started
2025-08-16T04:01:17,446 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.apache.aries.blueprint.core/1.10.3 has been started
2025-08-16T04:01:17,515 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Starting JMX OSGi agent
2025-08-16T04:01:17,530 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Registering MBean with ObjectName [osgi.core:service=permissionadmin,version=1.2,framework=org.eclipse.osgi,uuid=e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866] for service with service.id [15]
2025-08-16T04:01:17,540 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Registering MBean with ObjectName [osgi.compendium:service=cm,version=1.3,framework=org.eclipse.osgi,uuid=e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866] for service with service.id [40]
2025-08-16T04:01:17,553 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | ROOT | 93 - org.apache.felix.scr - 2.2.6 | bundle org.apache.felix.scr:2.2.6 (93) Starting with globalExtender setting: false
2025-08-16T04:01:17,556 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | ROOT | 93 - org.apache.felix.scr - 2.2.6 | bundle org.apache.felix.scr:2.2.6 (93) Version = 2.2.6
2025-08-16T04:01:17,652 | INFO | activator-1-thread-1 | Activator | 113 - org.apache.karaf.management.server - 4.4.7 | Setting java.rmi.server.hostname system property to 127.0.0.1
2025-08-16T04:01:17,761 | INFO | activator-1-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Registering org.osgi.jmx.framework.wiring.BundleWiringStateMBean to MBeanServer org.apache.karaf.management.internal.EventAdminMBeanServerWrapper@2a4a83c1 with name osgi.core:type=wiringState,version=1.1,framework=org.eclipse.osgi,uuid=e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866
2025-08-16T04:01:17,762 | INFO | activator-1-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Registering org.osgi.jmx.framework.PackageStateMBean to MBeanServer org.apache.karaf.management.internal.EventAdminMBeanServerWrapper@2a4a83c1 with name osgi.core:type=packageState,version=1.5,framework=org.eclipse.osgi,uuid=e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866
2025-08-16T04:01:17,763 | INFO | activator-1-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Registering
org.osgi.jmx.service.permissionadmin.PermissionAdminMBean to MBeanServer org.apache.karaf.management.internal.EventAdminMBeanServerWrapper@2a4a83c1 with name osgi.core:service=permissionadmin,version=1.2,framework=org.eclipse.osgi,uuid=e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 2025-08-16T04:01:17,763 | INFO | activator-1-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Registering org.osgi.jmx.framework.FrameworkMBean to MBeanServer org.apache.karaf.management.internal.EventAdminMBeanServerWrapper@2a4a83c1 with name osgi.core:type=framework,version=1.7,framework=org.eclipse.osgi,uuid=e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 2025-08-16T04:01:17,763 | INFO | activator-1-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Registering org.osgi.jmx.framework.BundleStateMBean to MBeanServer org.apache.karaf.management.internal.EventAdminMBeanServerWrapper@2a4a83c1 with name osgi.core:type=bundleState,version=1.7,framework=org.eclipse.osgi,uuid=e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 2025-08-16T04:01:17,765 | INFO | activator-1-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Registering org.osgi.jmx.framework.ServiceStateMBean to MBeanServer org.apache.karaf.management.internal.EventAdminMBeanServerWrapper@2a4a83c1 with name osgi.core:type=serviceState,version=1.7,framework=org.eclipse.osgi,uuid=e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 2025-08-16T04:01:17,765 | INFO | activator-1-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Registering org.osgi.jmx.service.cm.ConfigurationAdminMBean to MBeanServer org.apache.karaf.management.internal.EventAdminMBeanServerWrapper@2a4a83c1 with name osgi.compendium:service=cm,version=1.3,framework=org.eclipse.osgi,uuid=e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 2025-08-16T04:01:17,770 | INFO | activator-1-thread-2 | Activator | 99 - org.apache.karaf.deployer.features - 4.4.7 | Deployment finished. Registering FeatureDeploymentListener 2025-08-16T04:01:17,774 | INFO | activator-1-thread-1 | ServiceComponentRuntimeMBeanImpl | 115 - org.apache.karaf.scr.management - 4.4.7 | Activating the Apache Karaf ServiceComponentRuntime MBean 2025-08-16T04:01:17,868 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.bundle.core/4.4.7 2025-08-16T04:01:17,892 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.config.command/4.4.7 2025-08-16T04:01:17,977 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.deployer.kar/4.4.7 2025-08-16T04:01:17,979 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.diagnostic.core/4.4.7 2025-08-16T04:01:17,988 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.features.command/4.4.7 2025-08-16T04:01:17,991 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Command registration delayed for bundle org.apache.karaf.http.core/4.4.7. 
Missing service: [org.apache.karaf.http.core.ProxyService] 2025-08-16T04:01:17,997 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.instance.core/4.4.7 2025-08-16T04:01:18,005 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.jaas.command/4.4.7 2025-08-16T04:01:18,006 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Updating commands for bundle org.apache.karaf.jaas.command/4.4.7 2025-08-16T04:01:18,007 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Updating commands for bundle org.apache.karaf.jaas.command/4.4.7 2025-08-16T04:01:18,015 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.kar.core/4.4.7 2025-08-16T04:01:18,020 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.log.core/4.4.7 2025-08-16T04:01:18,021 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.package.core/4.4.7 2025-08-16T04:01:18,024 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.service.core/4.4.7 2025-08-16T04:01:18,030 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.shell.commands/4.4.7 2025-08-16T04:01:18,030 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Updating commands for bundle org.apache.karaf.shell.commands/4.4.7 2025-08-16T04:01:18,033 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | Activator | 120 - org.apache.karaf.shell.core - 4.4.7 | Not starting local console. To activate set karaf.startLocalConsole=true 2025-08-16T04:01:18,069 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.apache.karaf.shell.core/4.4.7 has been started 2025-08-16T04:01:18,104 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Command registration delayed for bundle org.apache.karaf.shell.ssh/4.4.7. 
Missing service: [org.apache.sshd.server.SshServer] 2025-08-16T04:01:18,150 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.system.core/4.4.7 2025-08-16T04:01:18,178 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Command registration delayed for bundle org.apache.karaf.web.core/4.4.7. Missing service: [org.apache.karaf.web.WebContainerService] 2025-08-16T04:01:18,219 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | Activator | 392 - org.ops4j.pax.web.pax-web-extender-war - 8.0.30 | Configuring WAR extender thread pool. Pool size = 3 2025-08-16T04:01:18,312 | INFO | activator-1-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.shell.ssh/4.4.7 2025-08-16T04:01:18,319 | INFO | activator-1-thread-1 | DefaultIoServiceFactoryFactory | 125 - org.apache.sshd.osgi - 2.14.0 | No detected/configured IoServiceFactoryFactory; using Nio2ServiceFactoryFactory 2025-08-16T04:01:18,385 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | Activator | 393 - org.ops4j.pax.web.pax-web-extender-whiteboard - 8.0.30 | Starting Pax Web Whiteboard Extender 2025-08-16T04:01:18,415 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | log | 139 - org.eclipse.jetty.util - 9.4.57.v20241219 | Logging initialized @3392ms to org.eclipse.jetty.util.log.Slf4jLog 2025-08-16T04:01:18,434 | INFO | CM Configuration Updater (ManagedService Update: pid=[org.ops4j.pax.web]) | Activator | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Scheduling Pax Web reconfiguration because configuration has changed 2025-08-16T04:01:18,436 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | Activator | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | EventAdmin support enabled, WAB events will be posted to EventAdmin topics. 
2025-08-16T04:01:18,436 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | Activator | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Pax Web Runtime started 2025-08-16T04:01:18,437 | INFO | paxweb-config-3-thread-1 (change config) | Activator | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Scheduling Pax Web reconfiguration because ServerControllerFactory has been registered 2025-08-16T04:01:18,446 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | BlueprintBundleTracker | 180 - org.opendaylight.controller.blueprint - 11.0.0 | Starting BlueprintBundleTracker 2025-08-16T04:01:18,458 | INFO | Blueprint Event Dispatcher: 1 | BlueprintBundleTracker | 180 - org.opendaylight.controller.blueprint - 11.0.0 | Blueprint container for bundle org.apache.aries.blueprint.cm_1.3.2 [78] was successfully created 2025-08-16T04:01:18,468 | INFO | Blueprint Event Dispatcher: 1 | BlueprintBundleTracker | 180 - org.opendaylight.controller.blueprint - 11.0.0 | Blueprint container for bundle org.apache.karaf.shell.core_4.4.7 [120] was successfully created 2025-08-16T04:01:18,469 | INFO | Blueprint Event Dispatcher: 1 | BlueprintBundleTracker | 180 - org.opendaylight.controller.blueprint - 11.0.0 | Blueprint container for bundle org.apache.aries.blueprint.core_1.10.3 [79] was successfully created 2025-08-16T04:01:18,494 | INFO | paxweb-config-3-thread-1 (change controller) | Activator | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Configuring server controller org.ops4j.pax.web.service.jetty.internal.JettyServerController 2025-08-16T04:01:18,495 | INFO | paxweb-config-3-thread-1 (change controller) | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Configuring JettyServerController{configuration=8578791c-c6c2-46ae-bc7a-8089d608f668,state=UNCONFIGURED} 2025-08-16T04:01:18,498 | INFO | paxweb-config-3-thread-1 (change controller) | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Creating Jetty server instance using configuration properties. 
2025-08-16T04:01:18,524 | INFO | paxweb-config-3-thread-1 (change controller) | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Processing Jetty configuration from files: [etc/jetty.xml] 2025-08-16T04:01:18,593 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.jdbc.core/4.4.7 2025-08-16T04:01:18,714 | INFO | paxweb-config-3-thread-1 (change controller) | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Found configured connector "jetty-default": 0.0.0.0:8181 2025-08-16T04:01:18,716 | INFO | paxweb-config-3-thread-1 (change controller) | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Using configured jetty-default@586a0c3e{HTTP/1.1, (http/1.1)}{0.0.0.0:8181} as non secure connector for address: 0.0.0.0:8181 2025-08-16T04:01:18,717 | INFO | paxweb-config-3-thread-1 (change controller) | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Eagerly starting Jetty thread pool QueuedThreadPool[qtp230590180]@dbe86e4{STOPPED,0<=0<=200,i=0,r=-1,q=0}[NO_TRY] 2025-08-16T04:01:18,723 | INFO | paxweb-config-3-thread-1 (change controller) | JettyFactory | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Adding JMX support to Jetty server 2025-08-16T04:01:18,729 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.cert/0.21.0 is waiting for dependencies [(objectClass=org.opendaylight.mdsal.binding.api.RpcProviderService), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (objectClass=org.opendaylight.aaa.encrypt.AAAEncryptionService)] 2025-08-16T04:01:18,744 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | CustomFilterAdapterConfigurationImpl | 166 - org.opendaylight.aaa.filterchain - 0.21.0 | Custom filter properties updated: {service.pid=org.opendaylight.aaa.filterchain, osgi.ds.satisfying.condition.target=(osgi.condition.id=true), customFilterList=, component.name=org.opendaylight.aaa.filterchain.configuration.impl.CustomFilterAdapterConfigurationImpl, felix.fileinstall.filename=file:/tmp/karaf-0.23.0/etc/org.opendaylight.aaa.filterchain.cfg, component.id=4, Filter.target=(org.opendaylight.aaa.filterchain.filter=true)} 2025-08-16T04:01:18,767 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.shiro/0.21.0 is waiting for dependencies [(&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (objectClass=org.opendaylight.aaa.api.password.service.PasswordHashService), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.aaa.cert.api.ICertificateManager), (objectClass=org.opendaylight.aaa.web.servlet.ServletSupport), (objectClass=org.opendaylight.aaa.api.IIDMStore), (objectClass=org.opendaylight.aaa.api.AuthenticationService), (objectClass=org.opendaylight.aaa.api.PasswordCredentialAuth)] 2025-08-16T04:01:18,772 
| INFO | paxweb-config-3-thread-1 (change controller) | Activator | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Starting server controller org.ops4j.pax.web.service.jetty.internal.JettyServerController 2025-08-16T04:01:18,772 | INFO | paxweb-config-3-thread-1 (change controller) | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Starting JettyServerController{configuration=8578791c-c6c2-46ae-bc7a-8089d608f668,state=STOPPED} 2025-08-16T04:01:18,772 | INFO | paxweb-config-3-thread-1 (change controller) | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Starting Server@603c42f8{STOPPED}[9.4.57.v20241219] 2025-08-16T04:01:18,774 | INFO | paxweb-config-3-thread-1 (change controller) | Server | 139 - org.eclipse.jetty.util - 9.4.57.v20241219 | jetty-9.4.57.v20241219; built: 2025-01-08T21:24:30.412Z; git: df524e6b29271c2e09ba9aea83c18dc9db464a31; jvm 21.0.5+11-Ubuntu-1ubuntu122.04 2025-08-16T04:01:18,795 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | BundleWhiteboardApplication | 393 - org.ops4j.pax.web.pax-web-extender-whiteboard - 8.0.30 | No matching target context(s) for Whiteboard element ServletModel{id=ServletModel-3,name='MoonTokenEndpoint',urlPatterns=[/moon],contexts=[]}. Filter: (osgi.http.whiteboard.context.name=default). Element may be re-registered later, when matching context/s is/are registered. 2025-08-16T04:01:18,803 | INFO | paxweb-config-3-thread-1 (change controller) | session | 139 - org.eclipse.jetty.util - 9.4.57.v20241219 | DefaultSessionIdManager workerName=node0 2025-08-16T04:01:18,803 | INFO | paxweb-config-3-thread-1 (change controller) | session | 139 - org.eclipse.jetty.util - 9.4.57.v20241219 | No SessionScavenger set, using defaults 2025-08-16T04:01:18,804 | INFO | paxweb-config-3-thread-1 (change controller) | session | 139 - org.eclipse.jetty.util - 9.4.57.v20241219 | node0 Scavenging every 660000ms 2025-08-16T04:01:18,815 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.shiro/0.21.0 is waiting for dependencies [(&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (objectClass=org.opendaylight.aaa.api.password.service.PasswordHashService), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.aaa.cert.api.ICertificateManager), (objectClass=org.opendaylight.aaa.web.servlet.ServletSupport), (objectClass=org.opendaylight.aaa.api.IIDMStore), (objectClass=org.opendaylight.aaa.api.PasswordCredentialAuth)] 2025-08-16T04:01:18,833 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.shiro/0.21.0 is waiting for dependencies [(&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (objectClass=org.opendaylight.aaa.api.password.service.PasswordHashService), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.aaa.cert.api.ICertificateManager), (objectClass=org.opendaylight.aaa.api.IIDMStore), 
(objectClass=org.opendaylight.aaa.api.PasswordCredentialAuth)] 2025-08-16T04:01:18,852 | INFO | paxweb-config-3-thread-1 (change controller) | AbstractConnector | 139 - org.eclipse.jetty.util - 9.4.57.v20241219 | Started jetty-default@586a0c3e{HTTP/1.1, (http/1.1)}{0.0.0.0:8181} 2025-08-16T04:01:18,853 | INFO | paxweb-config-3-thread-1 (change controller) | Server | 139 - org.eclipse.jetty.util - 9.4.57.v20241219 | Started @3837ms 2025-08-16T04:01:18,854 | INFO | paxweb-config-3-thread-1 (change controller) | Activator | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering HttpService factory 2025-08-16T04:01:18,858 | INFO | paxweb-config-3-thread-1 (change controller) | StoppableHttpServiceFactory | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Binding HTTP Service for bundle: [org.apache.karaf.http.core_4.4.7 [105]] 2025-08-16T04:01:18,858 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | FileAkkaConfigurationReader | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | File-based Pekko configuration reader enabled 2025-08-16T04:01:18,872 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | OSGiActorSystemProvider | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Actor System provider starting 2025-08-16T04:01:18,881 | INFO | paxweb-config-3-thread-1 (change controller) | StoppableHttpServiceFactory | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Binding HTTP Service for bundle: [org.apache.karaf.web.core_4.4.7 [124]] 2025-08-16T04:01:18,889 | INFO | activator-1-thread-2 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.http.core/4.4.7 2025-08-16T04:01:18,894 | INFO | activator-1-thread-2 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.web.core/4.4.7 2025-08-16T04:01:18,904 | INFO | paxweb-config-3-thread-1 (change controller) | StoppableHttpServiceFactory | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Binding HTTP Service for bundle: [org.jolokia.osgi_1.7.2 [155]] 2025-08-16T04:01:18,909 | INFO | HttpService->Whiteboard (add HttpService) | StoppableHttpServiceFactory | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Binding HTTP Service for bundle: [org.ops4j.pax.web.pax-web-extender-whiteboard_8.0.30 [393]] 2025-08-16T04:01:18,909 | INFO | HttpService->WarExtender (add HttpService) | StoppableHttpServiceFactory | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Binding HTTP Service for bundle: [org.ops4j.pax.web.pax-web-extender-war_8.0.30 [392]] 2025-08-16T04:01:18,916 | INFO | paxweb-config-3-thread-1 (change controller) | ServerModel | 397 - org.ops4j.pax.web.pax-web-spi - 8.0.30 | Created new ServletContextModel{id=ServletContextModel-5,contextPath='/'} 2025-08-16T04:01:18,916 | INFO | paxweb-config-3-thread-1 (change controller) | HttpServiceEnabled | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering ServletModel{id=ServletModel-4,name='org.jolokia.osgi.servlet.JolokiaServlet',alias='/jolokia',urlPatterns=[/jolokia/*],servlet=org.jolokia.osgi.servlet.JolokiaServlet@23b75616,contexts=[{HS,OCM-6,context:26934311,/}]} 2025-08-16T04:01:18,917 | INFO | paxweb-config-3-thread-1 (change controller) | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of 
ServletModel{id=ServletModel-4,name='org.jolokia.osgi.servlet.JolokiaServlet',alias='/jolokia',urlPatterns=[/jolokia/*],servlet=org.jolokia.osgi.servlet.JolokiaServlet@23b75616,contexts=null}", size=4} 2025-08-16T04:01:18,917 | INFO | paxweb-config-3-thread-1 (change controller) | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Creating new Jetty context for ServletContextModel{id=ServletContextModel-5,contextPath='/'} 2025-08-16T04:01:18,942 | INFO | paxweb-config-3-thread-1 (change controller) | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Adding OsgiContextModel{HS,id=OCM-6,name='context:26934311',path='/',bundle=org.jolokia.osgi,context=WebContainerContextWrapper{bundle=org.jolokia.osgi_1.7.2 [155],contextId='context:26934311',delegate=org.jolokia.osgi.security.ServiceAuthenticationHttpContext@19afc27}} to o.o.p.w.s.j.i.PaxWebServletContextHandler@28849b30{/,null,STOPPED} 2025-08-16T04:01:18,945 | INFO | paxweb-config-3-thread-1 (change controller) | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing default OSGi context model for o.o.p.w.s.j.i.PaxWebServletContextHandler@28849b30{/,null,STOPPED} 2025-08-16T04:01:18,945 | INFO | paxweb-config-3-thread-1 (change controller) | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Adding servlet ServletModel{id=ServletModel-4,name='org.jolokia.osgi.servlet.JolokiaServlet',alias='/jolokia',urlPatterns=[/jolokia/*],servlet=org.jolokia.osgi.servlet.JolokiaServlet@23b75616,contexts=[{HS,OCM-6,context:26934311,/}]} 2025-08-16T04:01:18,949 | INFO | paxweb-config-3-thread-1 (change controller) | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Starting Jetty context "/" with default Osgi Context OsgiContextModel{HS,id=OCM-6,name='context:26934311',path='/',bundle=org.jolokia.osgi,context=WebContainerContextWrapper{bundle=org.jolokia.osgi_1.7.2 [155],contextId='context:26934311',delegate=org.jolokia.osgi.security.ServiceAuthenticationHttpContext@19afc27}} 2025-08-16T04:01:18,965 | INFO | paxweb-config-3-thread-1 (change controller) | osgi | 155 - org.jolokia.osgi - 1.7.2 | No access restrictor found, access to any MBean is allowed 2025-08-16T04:01:19,021 | INFO | paxweb-config-3-thread-1 (change controller) | ContextHandler | 139 - org.eclipse.jetty.util - 9.4.57.v20241219 | Started o.o.p.w.s.j.i.PaxWebServletContextHandler@28849b30{/,null,AVAILABLE} 2025-08-16T04:01:19,021 | INFO | paxweb-config-3-thread-1 (change controller) | OsgiServletContext | 397 - org.ops4j.pax.web.pax-web-spi - 8.0.30 | Registering OsgiServletContext{model=OsgiContextModel{HS,id=OCM-6,name='context:26934311',path='/',bundle=org.jolokia.osgi,context=WebContainerContextWrapper{bundle=org.jolokia.osgi_1.7.2 [155],contextId='context:26934311',delegate=org.jolokia.osgi.security.ServiceAuthenticationHttpContext@19afc27}}} as OSGi service for "/" context path 2025-08-16T04:01:19,024 | INFO | paxweb-config-3-thread-1 (change controller) | Activator | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering HttpServiceRuntime 2025-08-16T04:01:19,028 | INFO | paxweb-config-3-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of OsgiContextModel{WB,id=OCM-1,name='default',path='/',bundle=org.ops4j.pax.web.pax-web-extender-whiteboard,context=(supplier)}", size=1} 2025-08-16T04:01:19,029 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Adding 
OsgiContextModel{WB,id=OCM-1,name='default',path='/',bundle=org.ops4j.pax.web.pax-web-extender-whiteboard,context=(supplier)} to o.o.p.w.s.j.i.PaxWebServletContextHandler@28849b30{/,null,AVAILABLE} 2025-08-16T04:01:19,031 | INFO | HttpService->Whiteboard (add HttpService) | StoppableHttpServiceFactory | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Binding HTTP Service for bundle: [org.opendaylight.aaa.shiro_0.21.0 [172]] 2025-08-16T04:01:19,033 | INFO | paxweb-config-3-thread-1 | HttpServiceEnabled | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering ServletModel{id=ServletModel-3,name='MoonTokenEndpoint',urlPatterns=[/moon],contexts=[{WB,OCM-1,default,/}]} 2025-08-16T04:01:19,033 | INFO | paxweb-config-3-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of ServletModel{id=ServletModel-3,name='MoonTokenEndpoint',urlPatterns=[/moon],contexts=[{WB,OCM-1,default,/}]}", size=1} 2025-08-16T04:01:19,033 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Adding servlet ServletModel{id=ServletModel-3,name='MoonTokenEndpoint',urlPatterns=[/moon],contexts=[{WB,OCM-1,default,/}]} 2025-08-16T04:01:19,158 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | ActorSystemProviderImpl | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Creating new ActorSystem 2025-08-16T04:01:19,438 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | Slf4jLogger | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Slf4jLogger started 2025-08-16T04:01:19,690 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-9 | ArteryTransport | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Remoting started with transport [Artery tcp]; listening on address [pekko://opendaylight-cluster-data@10.30.170.30:2550] with UID [2900354809329343885] 2025-08-16T04:01:19,691 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-9 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.30:2550] - Starting up, Pekko version [1.0.3] ... 2025-08-16T04:01:19,775 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.30:2550] - Registered cluster JMX MBean [pekko:type=Cluster] 2025-08-16T04:01:19,775 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.30:2550] - Started up successfully 2025-08-16T04:01:19,894 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-9 | SplitBrainResolver | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | SBR started. Config: strategy [KeepMajority], stable-after [7 seconds], down-all-when-unstable [5250 milliseconds], selfUniqueAddress [pekko://opendaylight-cluster-data@10.30.170.30:2550#2900354809329343885], selfDc [default]. 
2025-08-16T04:01:19,973 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | OSGiActorSystemProvider | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Actor System provider started 2025-08-16T04:01:19,997 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | FileModuleShardConfigProvider | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Shard configuration provider started 2025-08-16T04:01:20,039 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Command registration delayed for bundle org.opendaylight.infrautils.diagstatus-shell/7.1.4. Missing service: [org.opendaylight.infrautils.diagstatus.DiagStatusServiceMBean] 2025-08-16T04:01:20,122 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.30:2550] - Received InitJoin message from [Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/system/cluster/core/daemon/joinSeedNodeProcess-1#-84969709]], but this node is not initialized yet 2025-08-16T04:01:20,211 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | KarafSystemReady | 202 - org.opendaylight.infrautils.ready-impl - 7.1.4 | ThreadFactory created: SystemReadyService 2025-08-16T04:01:20,213 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | KarafSystemReady | 202 - org.opendaylight.infrautils.ready-impl - 7.1.4 | Now starting to provide full system readiness status updates (see TestBundleDiag's logs)... 2025-08-16T04:01:20,213 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | DiagStatusServiceImpl | 199 - org.opendaylight.infrautils.diagstatus-impl - 7.1.4 | Diagnostic Status Service started 2025-08-16T04:01:20,218 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | MBeanUtils | 198 - org.opendaylight.infrautils.diagstatus-api - 7.1.4 | MBean registration for org.opendaylight.infrautils.diagstatus:type=SvcStatus SUCCESSFUL. 2025-08-16T04:01:20,218 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | DiagStatusServiceMBeanImpl | 199 - org.opendaylight.infrautils.diagstatus-impl - 7.1.4 | Diagnostic Status Service management started 2025-08-16T04:01:20,218 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.opendaylight.infrautils.diagstatus-shell/7.1.4 2025-08-16T04:01:20,220 | INFO | SystemReadyService-0 | KarafSystemReady | 202 - org.opendaylight.infrautils.ready-impl - 7.1.4 | checkBundleDiagInfos() started... 
2025-08-16T04:01:20,241 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.30:2550] - Received InitJoin message from [Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/system/cluster/core/daemon/firstSeedNodeProcess-1#751062405]], but this node is not initialized yet 2025-08-16T04:01:20,241 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.30:2550] - Received InitJoin message from [Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/system/cluster/core/daemon/firstSeedNodeProcess-1#751062405]], but this node is not initialized yet 2025-08-16T04:01:20,254 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.forwardingrules-manager/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.serviceutils.srm.ServiceRecoveryRegistry), (objectClass=org.opendaylight.mdsal.binding.api.RpcService), (objectClass=org.opendaylight.openflowplugin.applications.frm.recovery.OpenflowServiceRecoveryHandler), (objectClass=org.opendaylight.openflowplugin.api.openflow.mastership.MastershipChangeServiceManager), (objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService), (objectClass=org.opendaylight.openflowplugin.applications.reconciliation.ReconciliationManager), (objectClass=org.opendaylight.mdsal.binding.api.RpcProviderService), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.openflowplugin.api.openflow.FlowGroupCacheManager), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker)] 2025-08-16T04:01:20,264 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.lldp-speaker/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.mdsal.binding.api.RpcService), (objectClass=org.opendaylight.openflowplugin.applications.deviceownershipservice.DeviceOwnershipService), (objectClass=org.opendaylight.mdsal.binding.api.RpcProviderService), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer))] 2025-08-16T04:01:20,269 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Command registration delayed for bundle org.opendaylight.openflowplugin.applications.reconciliation-framework/0.20.0. 
Missing service: [org.opendaylight.openflowplugin.applications.reconciliation.ReconciliationManager] 2025-08-16T04:01:20,273 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.topology-lldp-discovery/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.mdsal.binding.api.NotificationService), (objectClass=org.opendaylight.mdsal.binding.api.NotificationPublishService), (objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService), (objectClass=org.opendaylight.mdsal.eos.binding.api.EntityOwnershipService), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer))] 2025-08-16T04:01:20,311 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.impl/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationServiceFactory), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer))] 2025-08-16T04:01:20,318 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.impl/0.20.0 is waiting for dependencies [(&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer))] 2025-08-16T04:01:20,319 | INFO | Blueprint Extender: 2 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.forwardingrules-manager/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.serviceutils.srm.ServiceRecoveryRegistry), (objectClass=org.opendaylight.mdsal.binding.api.RpcService), (objectClass=org.opendaylight.openflowplugin.applications.frm.recovery.OpenflowServiceRecoveryHandler), (objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService), (objectClass=org.opendaylight.mdsal.binding.api.RpcProviderService), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.openflowplugin.api.openflow.FlowGroupCacheManager), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker)] 2025-08-16T04:01:20,319 | INFO | Blueprint Extender: 2 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.forwardingrules-manager/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.serviceutils.srm.ServiceRecoveryRegistry), (objectClass=org.opendaylight.mdsal.binding.api.RpcService), (objectClass=org.opendaylight.openflowplugin.applications.frm.recovery.OpenflowServiceRecoveryHandler), (objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService), (objectClass=org.opendaylight.mdsal.binding.api.RpcProviderService), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.openflowplugin.api.openflow.FlowGroupCacheManager), 
(objectClass=org.opendaylight.mdsal.binding.api.DataBroker)] 2025-08-16T04:01:20,323 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | ReconciliationManagerImpl | 302 - org.opendaylight.openflowplugin.applications.reconciliation-framework - 0.20.0 | ReconciliationManager started 2025-08-16T04:01:20,323 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.opendaylight.openflowplugin.applications.reconciliation-framework/0.20.0 2025-08-16T04:01:20,324 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.forwardingrules-manager/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.serviceutils.srm.ServiceRecoveryRegistry), (objectClass=org.opendaylight.mdsal.binding.api.RpcService), (objectClass=org.opendaylight.openflowplugin.applications.frm.recovery.OpenflowServiceRecoveryHandler), (objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService), (objectClass=org.opendaylight.mdsal.binding.api.RpcProviderService), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker)] 2025-08-16T04:01:20,328 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | MessageIntelligenceAgencyImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Registered MBean org.opendaylight.openflowplugin.impl.statistics.ofpspecific:type=MessageIntelligenceAgencyMXBean 2025-08-16T04:01:20,330 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.opendaylight.openflowplugin.impl/0.20.0 2025-08-16T04:01:20,349 | INFO | Blueprint Extender: 2 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.forwardingrules-manager/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.mdsal.binding.api.RpcService), (objectClass=org.opendaylight.openflowplugin.applications.frm.recovery.OpenflowServiceRecoveryHandler), (objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService), (objectClass=org.opendaylight.mdsal.binding.api.RpcProviderService), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker)] 2025-08-16T04:01:20,349 | INFO | Blueprint Extender: 2 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.forwardingrules-manager/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.mdsal.binding.api.RpcService), (objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService), (objectClass=org.opendaylight.mdsal.binding.api.RpcProviderService), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker)] 2025-08-16T04:01:20,351 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | OpenflowServiceRecoveryHandlerImpl | 
299 - org.opendaylight.openflowplugin.applications.forwardingrules-manager - 0.20.0 | Registering openflowplugin service recovery handlers 2025-08-16T04:01:20,355 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Command registration delayed for bundle org.opendaylight.openflowplugin.srm-shell/0.20.0. Missing service: [org.opendaylight.serviceutils.srm.spi.RegistryControl, org.opendaylight.mdsal.binding.api.DataBroker] 2025-08-16T04:01:20,361 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | SimpleBindingDOMCodecFactory | 325 - org.opendaylight.yangtools.binding-data-codec-dynamic - 14.0.14 | Binding/DOM Codec enabled 2025-08-16T04:01:20,367 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | OSGiBindingDOMCodec | 326 - org.opendaylight.yangtools.binding-data-codec-osgi - 14.0.14 | Binding/DOM Codec activating 2025-08-16T04:01:20,369 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | OSGiBindingDOMCodec | 326 - org.opendaylight.yangtools.binding-data-codec-osgi - 14.0.14 | Binding/DOM Codec activated 2025-08-16T04:01:20,373 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | DefaultBindingRuntimeGenerator | 328 - org.opendaylight.yangtools.binding-generator - 14.0.14 | Binding/YANG type support activated 2025-08-16T04:01:20,381 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | OSGiBindingRuntime | 333 - org.opendaylight.yangtools.binding-runtime-osgi - 14.0.14 | Binding Runtime activating 2025-08-16T04:01:20,382 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | OSGiBindingRuntime | 333 - org.opendaylight.yangtools.binding-runtime-osgi - 14.0.14 | Binding Runtime activated 2025-08-16T04:01:20,458 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | OSGiModelRuntime | 333 - org.opendaylight.yangtools.binding-runtime-osgi - 14.0.14 | Model Runtime starting 2025-08-16T04:01:20,477 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | KarafFeaturesSupport | 333 - org.opendaylight.yangtools.binding-runtime-osgi - 14.0.14 | Will attempt to integrate with Karaf FeaturesService 2025-08-16T04:01:20,968 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | NettyTransportSupport | 284 - org.opendaylight.netconf.transport-api - 9.0.0 | Netty transport backed by epoll(2) 2025-08-16T04:01:21,164 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | SharedEffectiveModelContextFactory | 379 - org.opendaylight.yangtools.yang-parser-impl - 14.0.14 | Using weak references 2025-08-16T04:01:23,191 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | OSGiModuleInfoSnapshotImpl | 333 - org.opendaylight.yangtools.binding-runtime-osgi - 14.0.14 | EffectiveModelContext generation 1 activated 2025-08-16T04:01:23,192 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | OSGiDOMSchemaService | 251 - org.opendaylight.mdsal.mdsal-dom-schema-osgi - 14.0.13 | DOM Schema services activated 2025-08-16T04:01:23,193 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | OSGiDOMSchemaService | 251 - org.opendaylight.mdsal.mdsal-dom-schema-osgi - 14.0.13 | Updating context to generation 1 2025-08-16T04:01:23,197 | INFO | Start Level: Equinox Container: 
e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | DOMRpcRouter | 250 - org.opendaylight.mdsal.mdsal-dom-broker - 14.0.13 | DOM RPC/Action router started 2025-08-16T04:01:23,204 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | OSGiRemoteOpsProvider | 197 - org.opendaylight.controller.sal-remoterpc-connector - 11.0.0 | Remote Operations service starting 2025-08-16T04:01:23,207 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | OSGiRemoteOpsProvider | 197 - org.opendaylight.controller.sal-remoterpc-connector - 11.0.0 | Remote Operations service started 2025-08-16T04:01:23,277 | INFO | opendaylight-cluster-data-pekko.persistence.dispatchers.default-plugin-dispatcher-33 | SegmentedFileJournal | 191 - org.opendaylight.controller.sal-akka-segmented-journal - 11.0.0 | Initialized with root directory segmented-journal with storage MAPPED 2025-08-16T04:01:23,993 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | OSGiBindingRuntimeContextImpl | 333 - org.opendaylight.yangtools.binding-runtime-osgi - 14.0.14 | BindingRuntimeContext generation 1 activated 2025-08-16T04:01:24,011 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | OSGiBindingDOMCodecServicesImpl | 326 - org.opendaylight.yangtools.binding-data-codec-osgi - 14.0.14 | Binding/DOM Codec generation 1 activated 2025-08-16T04:01:24,011 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | GlobalBindingDOMCodecServices | 326 - org.opendaylight.yangtools.binding-data-codec-osgi - 14.0.14 | Global Binding/DOM Codec activated with generation 1 2025-08-16T04:01:24,017 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | OSGiDatastoreContextIntrospectorFactory | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Datastore Context Introspector activated 2025-08-16T04:01:24,019 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | OSGiDistributedDataStore | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Distributed Datastore type CONFIGURATION starting 2025-08-16T04:01:24,286 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | DistributedDataStoreFactory | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Create data store instance of type : config 2025-08-16T04:01:24,287 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | AbstractModuleShardConfigProvider | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Config file exists - reading config from it 2025-08-16T04:01:24,288 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | AbstractModuleShardConfigProvider | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Config file exists - reading config from it 2025-08-16T04:01:24,294 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | AbstractDataStore | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Creating ShardManager : shardmanager-config 2025-08-16T04:01:24,334 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Starting ShardManager shard-manager-config 2025-08-16T04:01:24,340 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: 
Recovery complete 2025-08-16T04:01:24,414 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | DistributedDataStoreFactory | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Data store config is using tell-based protocol 2025-08-16T04:01:24,417 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | AbstractModuleShardConfigProvider | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Config file exists - reading config from it 2025-08-16T04:01:24,418 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | AbstractModuleShardConfigProvider | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Config file exists - reading config from it 2025-08-16T04:01:24,423 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | OSGiDistributedDataStore | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Distributed Datastore type OPERATIONAL starting 2025-08-16T04:01:24,424 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | DistributedDataStoreFactory | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Create data store instance of type : operational 2025-08-16T04:01:24,424 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | AbstractDataStore | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Creating ShardManager : shardmanager-operational 2025-08-16T04:01:24,430 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-topology-config: Shard created, persistent : true 2025-08-16T04:01:24,432 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Starting ShardManager shard-manager-operational 2025-08-16T04:01:24,434 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-default-config: Shard created, persistent : true 2025-08-16T04:01:24,435 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-toaster-config: Shard created, persistent : true 2025-08-16T04:01:24,441 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: Shard created, persistent : true 2025-08-16T04:01:24,459 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Recovery complete 2025-08-16T04:01:24,464 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | DistributedDataStoreFactory | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Data store operational is using tell-based protocol 2025-08-16T04:01:24,468 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-9 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier:pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-config/member-3-shard-topology-config/member-3-shard-topology-config-notifier#558032949 created and ready for shard:member-3-shard-topology-config 2025-08-16T04:01:24,469 
| INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-28 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier:pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-config/member-3-shard-inventory-config/member-3-shard-inventory-config-notifier#-1518734938 created and ready for shard:member-3-shard-inventory-config 2025-08-16T04:01:24,469 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-31 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier:pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-config/member-3-shard-default-config/member-3-shard-default-config-notifier#1136774802 created and ready for shard:member-3-shard-default-config 2025-08-16T04:01:24,469 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-28 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier:pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-config/member-3-shard-toaster-config/member-3-shard-toaster-config-notifier#-210587784 created and ready for shard:member-3-shard-toaster-config 2025-08-16T04:01:24,470 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | OSGiBlockingBindingNormalizer | 204 - org.opendaylight.mdsal.binding-dom-adapter - 14.0.13 | Binding/DOM adapter activated 2025-08-16T04:01:24,472 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | RaftActor | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-config: Starting recovery with journal batch size 1 2025-08-16T04:01:24,473 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-default-operational: Shard created, persistent : false 2025-08-16T04:01:24,473 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | RaftActor | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-config: Starting recovery with journal batch size 1 2025-08-16T04:01:24,473 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | RaftActor | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-toaster-config: Starting recovery with journal batch size 1 2025-08-16T04:01:24,474 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-29 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier:pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-operational/member-3-shard-default-operational/member-3-shard-default-operational-notifier#-1354840324 created and ready for shard:member-3-shard-default-operational 2025-08-16T04:01:24,475 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | RaftActor | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-operational: Starting recovery with journal batch size 1 2025-08-16T04:01:24,476 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | RaftActor | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-config: Starting recovery with journal batch size 1 2025-08-16T04:01:24,493 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-topology-operational: Shard created, persistent : false 2025-08-16T04:01:24,494 | INFO | Start Level: 
Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | AbstractAdaptedService | 204 - org.opendaylight.mdsal.binding-dom-adapter - 14.0.13 | Binding/DOM adapter for MountPointService activated 2025-08-16T04:01:24,494 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-30 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier:pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-operational/member-3-shard-topology-operational/member-3-shard-topology-operational-notifier#153155892 created and ready for shard:member-3-shard-topology-operational 2025-08-16T04:01:24,495 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | RaftActor | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-operational: Starting recovery with journal batch size 1 2025-08-16T04:01:24,501 | INFO | opendaylight-cluster-data-pekko.persistence.dispatchers.default-plugin-dispatcher-44 | SegmentedFileJournal | 191 - org.opendaylight.controller.sal-akka-segmented-journal - 11.0.0 | Initialized with root directory segmented-journal with storage DISK 2025-08-16T04:01:24,503 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-operational: Shard created, persistent : false 2025-08-16T04:01:24,504 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-toaster-operational: Shard created, persistent : false 2025-08-16T04:01:24,504 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-30 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier:pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-operational/member-3-shard-inventory-operational/member-3-shard-inventory-operational-notifier#1760729599 created and ready for shard:member-3-shard-inventory-operational 2025-08-16T04:01:24,505 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-30 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier:pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-operational/member-3-shard-toaster-operational/member-3-shard-toaster-operational-notifier#-717081100 created and ready for shard:member-3-shard-toaster-operational 2025-08-16T04:01:24,505 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | RaftActor | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-toaster-operational: Starting recovery with journal batch size 1 2025-08-16T04:01:24,505 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | DOMNotificationRouter | 250 - org.opendaylight.mdsal.mdsal-dom-broker - 14.0.13 | DOM Notification Router started 2025-08-16T04:01:24,508 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.topology-lldp-discovery/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.mdsal.binding.api.NotificationPublishService), (objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService), (objectClass=org.opendaylight.mdsal.eos.binding.api.EntityOwnershipService), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker), 
(&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer))] 2025-08-16T04:01:24,508 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | AbstractAdaptedService | 204 - org.opendaylight.mdsal.binding-dom-adapter - 14.0.13 | Binding/DOM adapter for NotificationService activated 2025-08-16T04:01:24,510 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | RaftActor | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-operational: Starting recovery with journal batch size 1 2025-08-16T04:01:24,512 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | AbstractAdaptedService | 204 - org.opendaylight.mdsal.binding-dom-adapter - 14.0.13 | Binding/DOM adapter for NotificationPublishService activated 2025-08-16T04:01:24,512 | INFO | Blueprint Extender: 2 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.topology-lldp-discovery/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService), (objectClass=org.opendaylight.mdsal.eos.binding.api.EntityOwnershipService), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer))] 2025-08-16T04:01:24,513 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.forwardingrules-manager/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService), (objectClass=org.opendaylight.mdsal.binding.api.RpcProviderService), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker)] 2025-08-16T04:01:24,514 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.lldp-speaker/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.openflowplugin.applications.deviceownershipservice.DeviceOwnershipService), (objectClass=org.opendaylight.mdsal.binding.api.RpcProviderService), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer))] 2025-08-16T04:01:24,514 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | AbstractAdaptedService | 204 - org.opendaylight.mdsal.binding-dom-adapter - 14.0.13 | Binding/DOM adapter for RpcService activated 2025-08-16T04:01:24,515 | INFO | Blueprint Extender: 2 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.cert/0.21.0 is waiting for dependencies [(&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker)), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (objectClass=org.opendaylight.aaa.encrypt.AAAEncryptionService)] 2025-08-16T04:01:24,519 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | AbstractAdaptedService | 204 - org.opendaylight.mdsal.binding-dom-adapter - 14.0.13 | Binding/DOM adapter 
for RpcProviderService activated 2025-08-16T04:01:24,548 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | EnabledRaftStorage | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-operational: journal open: applyTo=0 2025-08-16T04:01:24,549 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | EnabledRaftStorage | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-toaster-operational: journal open: applyTo=0 2025-08-16T04:01:24,549 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | EnabledRaftStorage | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-config: journal open: applyTo=0 2025-08-16T04:01:24,549 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | EnabledRaftStorage | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-operational: journal open: applyTo=0 2025-08-16T04:01:24,550 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | EnabledRaftStorage | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-operational: journal open: applyTo=0 2025-08-16T04:01:24,551 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | EnabledRaftStorage | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-toaster-config: journal open: applyTo=0 2025-08-16T04:01:24,554 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | EnabledRaftStorage | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-config: journal open: applyTo=4 2025-08-16T04:01:24,554 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | EnabledRaftStorage | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-config: journal open: applyTo=79 2025-08-16T04:01:24,575 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-operational: Recovery completed - Switching actor to Follower - last log index = -1, last log term = -1, snapshot index = -1, snapshot term = -1, journal size = 0 2025-08-16T04:01:24,575 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-config: Recovery completed - Switching actor to Follower - last log index = -1, last log term = -1, snapshot index = -1, snapshot term = -1, journal size = 0 2025-08-16T04:01:24,576 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-operational: Recovery completed - Switching actor to Follower - last log index = -1, last log term = -1, snapshot index = -1, snapshot term = -1, journal size = 0 2025-08-16T04:01:24,578 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-toaster-operational: Recovery completed - Switching actor to Follower - last log index = -1, last log term = -1, snapshot index = -1, snapshot term = -1, journal size = 0 2025-08-16T04:01:24,581 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-toaster-config: Recovery completed - Switching actor to Follower - last log index = -1, last log term = -1, snapshot index = -1, snapshot term = -1, journal size = 0 2025-08-16T04:01:24,583 | INFO | 
opendaylight-cluster-data-shard-dispatcher-40 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-config: Recovery completed - Switching actor to Follower - last log index = -1, last log term = -1, snapshot index = -1, snapshot term = -1, journal size = 0 2025-08-16T04:01:24,586 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-config: Recovery completed - Switching actor to Follower - last log index = -1, last log term = -1, snapshot index = -1, snapshot term = -1, journal size = 0 2025-08-16T04:01:24,589 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-operational: Recovery completed - Switching actor to Follower - last log index = -1, last log term = -1, snapshot index = -1, snapshot term = -1, journal size = 0 2025-08-16T04:01:24,591 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-27 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-inventory-operational , received role change from null to Follower 2025-08-16T04:01:24,592 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-27 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-topology-operational , received role change from null to Follower 2025-08-16T04:01:24,592 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-27 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-default-operational , received role change from null to Follower 2025-08-16T04:01:24,592 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-27 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-toaster-operational , received role change from null to Follower 2025-08-16T04:01:24,592 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-29 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-topology-config , received role change from null to Follower 2025-08-16T04:01:24,595 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-31 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-inventory-operational , registered listener pekko://opendaylight-cluster-data/user/shardmanager-operational 2025-08-16T04:01:24,596 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received role changed for member-3-shard-inventory-operational from null to Follower 2025-08-16T04:01:24,598 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-9 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-topology-config , registered listener pekko://opendaylight-cluster-data/user/shardmanager-config 2025-08-16T04:01:24,599 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-9 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for 
member-3-shard-toaster-config , received role change from null to Follower 2025-08-16T04:01:24,599 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received role changed for member-3-shard-topology-config from null to Follower 2025-08-16T04:01:24,599 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-9 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-toaster-config , registered listener pekko://opendaylight-cluster-data/user/shardmanager-config 2025-08-16T04:01:24,600 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-toaster-operational , registered listener pekko://opendaylight-cluster-data/user/shardmanager-operational 2025-08-16T04:01:24,600 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-default-operational , registered listener pekko://opendaylight-cluster-data/user/shardmanager-operational 2025-08-16T04:01:24,600 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received role changed for member-3-shard-toaster-config from null to Follower 2025-08-16T04:01:24,600 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-topology-operational , registered listener pekko://opendaylight-cluster-data/user/shardmanager-operational 2025-08-16T04:01:24,600 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received role changed for member-3-shard-toaster-operational from null to Follower 2025-08-16T04:01:24,601 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received role changed for member-3-shard-default-operational from null to Follower 2025-08-16T04:01:24,601 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received role changed for member-3-shard-topology-operational from null to Follower 2025-08-16T04:01:24,615 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.lldp-speaker/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.openflowplugin.applications.deviceownershipservice.DeviceOwnershipService), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer))] 2025-08-16T04:01:24,615 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.forwardingrules-manager/0.20.0 is waiting for dependencies 
[(objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker)] 2025-08-16T04:01:24,615 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-32 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-inventory-config , received role change from null to Follower 2025-08-16T04:01:24,616 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-32 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-inventory-config , registered listener pekko://opendaylight-cluster-data/user/shardmanager-config 2025-08-16T04:01:24,616 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received role changed for member-3-shard-inventory-config from null to Follower 2025-08-16T04:01:24,619 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | AbstractAdaptedService | 204 - org.opendaylight.mdsal.binding-dom-adapter - 14.0.13 | Binding/DOM adapter for ActionService activated 2025-08-16T04:01:24,629 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | AbstractAdaptedService | 204 - org.opendaylight.mdsal.binding-dom-adapter - 14.0.13 | Binding/DOM adapter for ActionProviderService activated 2025-08-16T04:01:24,630 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | DynamicBindingAdapter | 204 - org.opendaylight.mdsal.binding-dom-adapter - 14.0.13 | 8 DOMService trackers started 2025-08-16T04:01:24,633 | INFO | Blueprint Extender: 2 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.lldp-speaker/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer))] 2025-08-16T04:01:24,633 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.topology-lldp-discovery/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer))] 2025-08-16T04:01:24,636 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | GlobalBindingRuntimeContext | 333 - org.opendaylight.yangtools.binding-runtime-osgi - 14.0.14 | Global BindingRuntimeContext generation 1 activated 2025-08-16T04:01:24,637 | INFO | Start Level: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | OSGiModelRuntime | 333 - org.opendaylight.yangtools.binding-runtime-osgi - 14.0.14 | Model Runtime started 2025-08-16T04:01:24,664 | INFO | Framework Event Dispatcher: Equinox Container: e0d15c05-d43b-4f0a-8ca6-f3fb6eb02866 | Main | 3 - org.ops4j.pax.logging.pax-logging-api - 2.2.8 | Karaf started in 8s. 
Bundle stats: 399 active, 400 total
2025-08-16T04:01:24,709 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-32 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-default-config , received role change from null to Follower
2025-08-16T04:01:24,710 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-32 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-default-config , registered listener pekko://opendaylight-cluster-data/user/shardmanager-config
2025-08-16T04:01:24,711 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received role changed for member-3-shard-default-config from null to Follower
2025-08-16T04:01:29,617 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | CandidateRegistryInit | 185 - org.opendaylight.controller.eos-dom-akka - 11.0.0 | member-3 : Initial removal of candidates from previous iteration failed. Rescheduling.
java.util.concurrent.TimeoutException: Ask timed out on [Actor[pekko://opendaylight-cluster-data/system/singletonProxyOwnerSupervisor-no-dc#398745478]] after [5000 ms]. Message of type [org.opendaylight.controller.eos.akka.owner.supervisor.command.ClearCandidatesForMember]. A typical reason for `AskTimeoutException` is that the recipient actor didn't send a reply.
    at org.apache.pekko.actor.typed.scaladsl.AskPattern$.$anonfun$onTimeout$1(AskPattern.scala:141) ~[bundleFile:?]
    at org.apache.pekko.pattern.PromiseActorRef$.$anonfun$apply$1(AskSupport.scala:737) ~[bundleFile:?]
    at org.apache.pekko.actor.Scheduler$$anon$7.run(Scheduler.scala:491) ~[bundleFile:?]
    at scala.concurrent.ExecutionContext$parasitic$.execute(ExecutionContext.scala:222) ~[bundleFile:?]
    at org.apache.pekko.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:384) ~[bundleFile:?]
    at org.apache.pekko.actor.LightArrayRevolverScheduler$$anon$3.executeBucket$1(LightArrayRevolverScheduler.scala:332) ~[bundleFile:?]
    at org.apache.pekko.actor.LightArrayRevolverScheduler$$anon$3.nextTick(LightArrayRevolverScheduler.scala:336) ~[bundleFile:?]
    at org.apache.pekko.actor.LightArrayRevolverScheduler$$anon$3.run(LightArrayRevolverScheduler.scala:288) ~[bundleFile:?]
    at java.lang.Thread.run(Thread.java:1583) ~[?:?]
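The WARN above comes from Pekko's typed ask pattern: CandidateRegistryInit asks the (not yet reachable) owner supervisor to clear stale candidates, no reply arrives within the 5000 ms budget, the returned future fails with AskTimeoutException, and the call is rescheduled. The following is a minimal, self-contained Java sketch of that pattern under assumed names; ClearRequest/ClearResponse and the deliberately silent actor are illustrative stand-ins, not the actual OpenDaylight EOS command classes.

    // Hypothetical reproduction of the ask/timeout behaviour seen in the WARN above.
    import java.time.Duration;
    import java.util.concurrent.CompletionStage;

    import org.apache.pekko.actor.typed.ActorRef;
    import org.apache.pekko.actor.typed.ActorSystem;
    import org.apache.pekko.actor.typed.javadsl.AskPattern;
    import org.apache.pekko.actor.typed.javadsl.Behaviors;

    public final class AskTimeoutDemo {
        // Illustrative message types, not the real ClearCandidatesForMember/ClearCandidatesResponse.
        record ClearRequest(ActorRef<ClearResponse> replyTo) { }
        record ClearResponse() { }

        public static void main(String[] args) {
            // An actor that never replies, mimicking an owner supervisor that is not yet available.
            ActorSystem<ClearRequest> silent =
                ActorSystem.create(Behaviors.ignore(), "silent-supervisor");

            CompletionStage<ClearResponse> reply = AskPattern.ask(
                silent,
                ClearRequest::new,          // wraps the temporary replyTo created by the ask pattern
                Duration.ofMillis(5000),    // same 5000 ms budget as in the log entry
                silent.scheduler());

            // With no reply, the future completes exceptionally with an AskTimeoutException,
            // which callers typically log and reschedule, much as CandidateRegistryInit does above.
            reply.whenComplete((ok, failure) -> {
                if (failure != null) {
                    System.err.println("Ask failed: " + failure);
                }
                silent.terminate();
            });
        }
    }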
2025-08-16T04:01:32,215 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-31 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.30:2550] - Received InitJoin message from [Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/system/cluster/core/daemon/joinSeedNodeProcess-1#-84969709]], but this node is not initialized yet 2025-08-16T04:01:32,299 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.30:2550] - Received InitJoinAck message from [Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/system/cluster/core/daemon#17800632]] to [pekko://opendaylight-cluster-data@10.30.170.30:2550] 2025-08-16T04:01:32,362 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.30:2550] - Welcome from [pekko://opendaylight-cluster-data@10.30.170.62:2550] 2025-08-16T04:01:32,371 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received MemberUp: memberName: MemberName{name=member-1}, address: pekko://opendaylight-cluster-data@10.30.170.62:2550 2025-08-16T04:01:32,372 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-default-operational with address pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-operational/member-1-shard-default-operational 2025-08-16T04:01:32,372 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-topology-operational with address pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-operational/member-1-shard-topology-operational 2025-08-16T04:01:32,372 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-inventory-operational with address pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-operational/member-1-shard-inventory-operational 2025-08-16T04:01:32,372 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-toaster-operational with address pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-operational/member-1-shard-toaster-operational 2025-08-16T04:01:32,372 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-operational: Peer address for peer member-1-shard-default-operational set to pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-operational/member-1-shard-default-operational 2025-08-16T04:01:32,373 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-operational: Peer address for peer member-1-shard-topology-operational set to 
pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-operational/member-1-shard-topology-operational 2025-08-16T04:01:32,373 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-operational: Peer address for peer member-1-shard-inventory-operational set to pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-operational/member-1-shard-inventory-operational 2025-08-16T04:01:32,373 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-toaster-operational: Peer address for peer member-1-shard-toaster-operational set to pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-operational/member-1-shard-toaster-operational 2025-08-16T04:01:32,371 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received MemberUp: memberName: MemberName{name=member-1}, address: pekko://opendaylight-cluster-data@10.30.170.62:2550 2025-08-16T04:01:32,373 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-default-config with address pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-config/member-1-shard-default-config 2025-08-16T04:01:32,374 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | LocalActorRef | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Message [org.apache.pekko.cluster.ClusterEvent$MemberJoined] to Actor[pekko://opendaylight-cluster-data/user/rpc/action-registry/gossiper#1078420212] was unhandled. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. 2025-08-16T04:01:32,374 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | LocalActorRef | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Message [org.apache.pekko.cluster.ClusterEvent$MemberJoined] to Actor[pekko://opendaylight-cluster-data/user/rpc/action-registry/gossiper#1078420212] was unhandled. [2] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. 2025-08-16T04:01:32,374 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-topology-config with address pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-config/member-1-shard-topology-config 2025-08-16T04:01:32,374 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | LocalActorRef | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Message [org.apache.pekko.cluster.ClusterEvent$MemberJoined] to Actor[pekko://opendaylight-cluster-data/user/rpc/registry/gossiper#1131532458] was unhandled. [3] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. 
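The dead-letter INFO entries above name the Pekko settings 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. In an OpenDaylight deployment these are normally tuned in the controller's Pekko/Akka configuration file rather than in code; the short Java sketch below is only an illustration, under assumed example values, of how the same overrides would be applied when an ActorSystem is created programmatically.

    // Illustrative only: overriding the dead-letter logging settings mentioned in the log.
    import com.typesafe.config.Config;
    import com.typesafe.config.ConfigFactory;
    import org.apache.pekko.actor.typed.ActorSystem;
    import org.apache.pekko.actor.typed.javadsl.Behaviors;

    public final class DeadLetterConfigDemo {
        public static void main(String[] args) {
            Config overrides = ConfigFactory.parseString(
                // Settings named in the log entries above; the values here are just examples.
                "pekko.log-dead-letters = 10\n"
                + "pekko.log-dead-letters-during-shutdown = off\n");

            ActorSystem<Void> system = ActorSystem.create(
                Behaviors.empty(), "demo", overrides.withFallback(ConfigFactory.load()));
            system.terminate();
        }
    }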
2025-08-16T04:01:32,374 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-inventory-config with address pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-config/member-1-shard-inventory-config 2025-08-16T04:01:32,374 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-config: Peer address for peer member-1-shard-default-config set to pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-config/member-1-shard-default-config 2025-08-16T04:01:32,374 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | LocalActorRef | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Message [org.apache.pekko.cluster.ClusterEvent$MemberJoined] to Actor[pekko://opendaylight-cluster-data/user/rpc/registry/gossiper#1131532458] was unhandled. [4] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. 2025-08-16T04:01:32,374 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-toaster-config with address pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-config/member-1-shard-toaster-config 2025-08-16T04:01:32,375 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-config: Peer address for peer member-1-shard-topology-config set to pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-config/member-1-shard-topology-config 2025-08-16T04:01:32,375 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-config: Peer address for peer member-1-shard-inventory-config set to pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-config/member-1-shard-inventory-config 2025-08-16T04:01:32,375 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-toaster-config: Peer address for peer member-1-shard-toaster-config set to pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-config/member-1-shard-toaster-config 2025-08-16T04:01:32,417 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-29 | ClusterSingletonProxy | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Singleton identified at [pekko://opendaylight-cluster-data@10.30.170.62:2550/system/singletonManagerOwnerSupervisor/OwnerSupervisor] 2025-08-16T04:01:32,484 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-29 | EmptyLocalActorRef | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Message [org.opendaylight.controller.eos.akka.owner.supervisor.command.ClearCandidatesResponse] to Actor[pekko://opendaylight-cluster-data/temp/$a] was not delivered. [5] dead letters encountered. If this is not an expected behavior then Actor[pekko://opendaylight-cluster-data/temp/$a] may have terminated unexpectedly. 
This logging can be turned off or adjusted with configuration settings 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. 2025-08-16T04:01:33,031 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received MemberUp: memberName: MemberName{name=member-2}, address: pekko://opendaylight-cluster-data@10.30.170.196:2550 2025-08-16T04:01:33,031 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received MemberUp: memberName: MemberName{name=member-2}, address: pekko://opendaylight-cluster-data@10.30.170.196:2550 2025-08-16T04:01:33,031 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-default-operational with address pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-operational/member-2-shard-default-operational 2025-08-16T04:01:33,031 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-topology-operational with address pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-operational/member-2-shard-topology-operational 2025-08-16T04:01:33,031 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-default-config with address pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-config/member-2-shard-default-config 2025-08-16T04:01:33,032 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-inventory-operational with address pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-operational/member-2-shard-inventory-operational 2025-08-16T04:01:33,032 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-topology-config with address pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-config/member-2-shard-topology-config 2025-08-16T04:01:33,032 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-toaster-operational with address pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-operational/member-2-shard-toaster-operational 2025-08-16T04:01:33,032 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-inventory-config with address pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-config/member-2-shard-inventory-config 2025-08-16T04:01:33,032 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-operational: Peer address for peer member-2-shard-default-operational set to 
pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-operational/member-2-shard-default-operational 2025-08-16T04:01:33,032 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-toaster-config with address pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-config/member-2-shard-toaster-config 2025-08-16T04:01:33,032 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-operational: Peer address for peer member-2-shard-topology-operational set to pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-operational/member-2-shard-topology-operational 2025-08-16T04:01:33,032 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-operational: Peer address for peer member-2-shard-inventory-operational set to pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-operational/member-2-shard-inventory-operational 2025-08-16T04:01:33,032 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-config: Peer address for peer member-2-shard-default-config set to pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-config/member-2-shard-default-config 2025-08-16T04:01:33,033 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-toaster-operational: Peer address for peer member-2-shard-toaster-operational set to pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-operational/member-2-shard-toaster-operational 2025-08-16T04:01:33,033 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received MemberUp: memberName: MemberName{name=member-3}, address: pekko://opendaylight-cluster-data@10.30.170.30:2550 2025-08-16T04:01:33,033 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-config: Peer address for peer member-2-shard-topology-config set to pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-config/member-2-shard-topology-config 2025-08-16T04:01:33,033 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-3-shard-default-operational with address pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-operational/member-3-shard-default-operational 2025-08-16T04:01:33,033 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-3-shard-topology-operational with address pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-operational/member-3-shard-topology-operational 2025-08-16T04:01:33,033 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-config: Peer address for peer 
member-2-shard-inventory-config set to pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-config/member-2-shard-inventory-config 2025-08-16T04:01:33,033 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-3-shard-inventory-operational with address pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-operational/member-3-shard-inventory-operational 2025-08-16T04:01:33,033 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-toaster-config: Peer address for peer member-2-shard-toaster-config set to pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-config/member-2-shard-toaster-config 2025-08-16T04:01:33,033 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-3-shard-toaster-operational with address pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-operational/member-3-shard-toaster-operational 2025-08-16T04:01:33,033 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received MemberUp: memberName: MemberName{name=member-3}, address: pekko://opendaylight-cluster-data@10.30.170.30:2550 2025-08-16T04:01:33,035 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-3-shard-default-config with address pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-config/member-3-shard-default-config 2025-08-16T04:01:33,035 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-3-shard-topology-config with address pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-config/member-3-shard-topology-config 2025-08-16T04:01:33,035 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-3-shard-inventory-config with address pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-config/member-3-shard-inventory-config 2025-08-16T04:01:33,035 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-3-shard-toaster-config with address pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-config/member-3-shard-toaster-config 2025-08-16T04:01:33,040 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-29 | ClusterSingletonManager | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | ClusterSingletonManager state change [Start -> Younger] 2025-08-16T04:01:33,819 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Follower | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-operational (Follower): Term 3 in "RequestVote{term=3, candidateId=member-1-shard-inventory-operational, lastLogIndex=-1, lastLogTerm=-1}" message is greater than follower's term 2 - updating term 
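The Follower entries that follow ("Term 3 in \"RequestVote{...}\" message is greater than follower's term 2 - updating term") show the standard Raft term rule: when a RequestVote arrives carrying a higher term than the follower's current term, the follower adopts that term and clears any vote it cast in the old term before deciding whether to grant the vote. Below is a minimal sketch of that rule; it illustrates the general Raft behaviour only and is not the actual sal-akka-raft implementation.

    // Minimal sketch of the Raft term-comparison rule reflected in the log lines above.
    final class RaftTermRule {
        record RequestVote(long term, String candidateId, long lastLogIndex, long lastLogTerm) { }

        private long currentTerm = 2;   // e.g. the follower's term before the RequestVote arrives
        private String votedFor = null;

        void onRequestVote(RequestVote rv) {
            if (rv.term() > currentTerm) {
                // Matches "Term 3 ... is greater than follower's term 2 - updating term"
                currentTerm = rv.term();
                votedFor = null;        // a new term invalidates any earlier vote
            }
            // (vote-granting logic based on log up-to-dateness would follow here)
        }
    }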
2025-08-16T04:01:33,830 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Follower | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-toaster-operational (Follower): Term 3 in "RequestVote{term=3, candidateId=member-1-shard-toaster-operational, lastLogIndex=-1, lastLogTerm=-1}" message is greater than follower's term 2 - updating term 2025-08-16T04:01:33,857 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@211179e9 2025-08-16T04:01:33,858 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | Follower | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-config (Follower): Term 3 in "RequestVote{term=3, candidateId=member-1-shard-topology-config, lastLogIndex=-1, lastLogTerm=-1}" message is greater than follower's term 2 - updating term 2025-08-16T04:01:33,859 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@239542d2 2025-08-16T04:01:33,860 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational Received follower initial sync status for member-3-shard-inventory-operational status sync done false 2025-08-16T04:01:33,860 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational Received follower initial sync status for member-3-shard-toaster-operational status sync done false 2025-08-16T04:01:33,868 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@61b9ad9 2025-08-16T04:01:33,868 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config Received follower initial sync status for member-3-shard-topology-config status sync done false 2025-08-16T04:01:33,877 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Follower | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-toaster-config (Follower): Term 3 in "RequestVote{term=3, candidateId=member-1-shard-toaster-config, lastLogIndex=-1, lastLogTerm=-1}" message is greater than follower's term 2 - updating term 2025-08-16T04:01:33,885 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Follower | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-operational (Follower): Term 3 in "RequestVote{term=3, candidateId=member-1-shard-default-operational, lastLogIndex=-1, lastLogTerm=-1}" message is greater than follower's term 2 - updating term 2025-08-16T04:01:33,889 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received LeaderStateChanged message: 
org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@75491de8 2025-08-16T04:01:33,889 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config Received follower initial sync status for member-3-shard-toaster-config status sync done false 2025-08-16T04:01:33,894 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@7b387a83 2025-08-16T04:01:33,895 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational Received follower initial sync status for member-3-shard-default-operational status sync done false 2025-08-16T04:01:33,897 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Follower | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-config (Follower): Term 3 in "RequestVote{term=3, candidateId=member-1-shard-default-config, lastLogIndex=78, lastLogTerm=2}" message is greater than follower's term 2 - updating term 2025-08-16T04:01:33,899 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Follower | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-operational (Follower): Term 3 in "RequestVote{term=3, candidateId=member-1-shard-topology-operational, lastLogIndex=-1, lastLogTerm=-1}" message is greater than follower's term 2 - updating term 2025-08-16T04:01:33,907 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@39789890 2025-08-16T04:01:33,908 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: All Shards are ready - data store operational is ready 2025-08-16T04:01:33,908 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@271d63a7 2025-08-16T04:01:33,908 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational Received follower initial sync status for member-3-shard-topology-operational status sync done false 2025-08-16T04:01:33,909 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config Received follower initial sync status for member-3-shard-default-config status sync done false 2025-08-16T04:01:33,911 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | OSGiDOMStore | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Datastore service type OPERATIONAL activated 2025-08-16T04:01:33,911 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | OSGiDistributedDataStore | 196 - 
org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Distributed Datastore type OPERATIONAL started 2025-08-16T04:01:33,917 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config Received follower initial sync status for member-3-shard-default-config status sync done true 2025-08-16T04:01:33,919 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Follower | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-config (Follower): Term 3 in "RequestVote{term=3, candidateId=member-1-shard-inventory-config, lastLogIndex=4, lastLogTerm=2}" message is greater than follower's term 2 - updating term 2025-08-16T04:01:33,928 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@7e3400b4 2025-08-16T04:01:33,928 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: All Shards are ready - data store config is ready 2025-08-16T04:01:33,929 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config Received follower initial sync status for member-3-shard-inventory-config status sync done false 2025-08-16T04:01:33,931 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | OSGiDOMStore | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Datastore service type CONFIGURATION activated 2025-08-16T04:01:33,933 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config Received follower initial sync status for member-3-shard-inventory-config status sync done true 2025-08-16T04:01:33,960 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | OSGiClusterAdmin | 193 - org.opendaylight.controller.sal-cluster-admin-impl - 11.0.0 | Cluster Admin services started 2025-08-16T04:01:33,971 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | ConcurrentDOMDataBroker | 358 - org.opendaylight.yangtools.util - 14.0.14 | ThreadFactory created: CommitFutures 2025-08-16T04:01:33,973 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | DataBrokerCommitExecutor | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | DOM Data Broker commit exector started 2025-08-16T04:01:33,975 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | ConcurrentDOMDataBroker | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | DOM Data Broker started 2025-08-16T04:01:33,979 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | AbstractAdaptedService | 204 - org.opendaylight.mdsal.binding-dom-adapter - 14.0.13 | Binding/DOM adapter for DataBroker activated 2025-08-16T04:01:33,979 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.cert/0.21.0 is waiting for dependencies [(&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker)), 
(objectClass=org.opendaylight.aaa.encrypt.AAAEncryptionService)] 2025-08-16T04:01:34,028 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-30 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-config: resolved shard 0 to ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-config/member-1-shard-default-config#1460049693], sessionId=0, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=absent} 2025-08-16T04:01:34,029 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-30 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-config: resolving connection ConnectingClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=1}, cookie=0} to ConnectedClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=1}, cookie=0, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-config/member-1-shard-default-config#1460049693], sessionId=0, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=absent}} 2025-08-16T04:01:34,042 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-30 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-config: replaced connection ConnectingClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=1}, cookie=0} with ConnectedClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=1}, cookie=0, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-config/member-1-shard-default-config#1460049693], sessionId=0, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=absent}} in 12.83 ms 2025-08-16T04:01:34,053 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.cert/0.21.0 is waiting for dependencies [Initial app config AaaCertServiceConfig] 2025-08-16T04:01:34,055 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | OSGiPasswordServiceConfigBootstrap | 170 - org.opendaylight.aaa.password-service-impl - 0.21.0 | Listening for password service configuration 2025-08-16T04:01:34,055 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.shiro/0.21.0 is waiting for dependencies [(&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker)), (objectClass=org.opendaylight.aaa.api.password.service.PasswordHashService), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker)), (objectClass=org.opendaylight.aaa.cert.api.ICertificateManager), (objectClass=org.opendaylight.aaa.api.IIDMStore), (objectClass=org.opendaylight.aaa.api.PasswordCredentialAuth)] 2025-08-16T04:01:34,060 | ERROR | opendaylight-cluster-data-notification-dispatcher-46 | H2Store | 167 - org.opendaylight.aaa.idm-store-h2 - 0.21.0 | bundle org.opendaylight.aaa.idm-store-h2:0.21.0 (167)[org.opendaylight.aaa.datastore.h2.H2Store(5)] : Constructor argument 0 in class class org.opendaylight.aaa.datastore.h2.H2Store has unsupported type 
org.opendaylight.aaa.datastore.h2.ConnectionProvider 2025-08-16T04:01:34,062 | INFO | opendaylight-cluster-data-notification-dispatcher-46 | DefaultPasswordHashService | 170 - org.opendaylight.aaa.password-service-impl - 0.21.0 | DefaultPasswordHashService will utilize default iteration count=20000 2025-08-16T04:01:34,062 | INFO | opendaylight-cluster-data-notification-dispatcher-46 | DefaultPasswordHashService | 170 - org.opendaylight.aaa.password-service-impl - 0.21.0 | DefaultPasswordHashService will utilize default algorithm=SHA-512 2025-08-16T04:01:34,063 | INFO | opendaylight-cluster-data-notification-dispatcher-46 | DefaultPasswordHashService | 170 - org.opendaylight.aaa.password-service-impl - 0.21.0 | DefaultPasswordHashService will not utilize a private salt, since none was configured 2025-08-16T04:01:34,063 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.shiro/0.21.0 is waiting for dependencies [(&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker)), (objectClass=org.opendaylight.aaa.api.password.service.PasswordHashService), Initial app config DatastoreConfig, (objectClass=org.opendaylight.aaa.cert.api.ICertificateManager), (objectClass=org.opendaylight.aaa.api.PasswordCredentialAuth)] 2025-08-16T04:01:34,079 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | EOSClusterSingletonServiceProvider | 257 - org.opendaylight.mdsal.mdsal-singleton-impl - 14.0.13 | Cluster Singleton Service started 2025-08-16T04:01:34,079 | INFO | opendaylight-cluster-data-notification-dispatcher-46 | H2Store | 167 - org.opendaylight.aaa.idm-store-h2 - 0.21.0 | H2 IDMStore activated 2025-08-16T04:01:34,081 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.shiro/0.21.0 is waiting for dependencies [Initial app config ShiroConfiguration, Initial app config DatastoreConfig, (objectClass=org.opendaylight.aaa.cert.api.ICertificateManager), (objectClass=org.opendaylight.aaa.api.PasswordCredentialAuth)] 2025-08-16T04:01:34,083 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.shiro/0.21.0 is waiting for dependencies [Initial app config ShiroConfiguration, Initial app config DatastoreConfig, (objectClass=org.opendaylight.aaa.cert.api.ICertificateManager)] 2025-08-16T04:01:34,084 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | YangLibraryWriterSingleton | 291 - org.opendaylight.netconf.yanglib-mdsal-writer - 9.0.0 | ietf-yang-library writer registered 2025-08-16T04:01:34,122 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-29 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-operational: resolved shard 0 to ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-operational/member-1-shard-default-operational#981983360], sessionId=0, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=absent} 2025-08-16T04:01:34,122 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-29 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-operational: resolving connection 
ConnectingClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-operational, generation=1}, cookie=0} to ConnectedClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-operational, generation=1}, cookie=0, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-operational/member-1-shard-default-operational#981983360], sessionId=0, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=absent}} 2025-08-16T04:01:34,125 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-29 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-operational: replaced connection ConnectingClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-operational, generation=1}, cookie=0} with ConnectedClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-operational, generation=1}, cookie=0, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-operational/member-1-shard-default-operational#981983360], sessionId=0, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=absent}} in 3.225 ms 2025-08-16T04:01:34,137 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | ArbitratorReconciliationManagerImpl | 296 - org.opendaylight.openflowplugin.applications.arbitratorreconciliation-impl - 0.20.0 | ArbitratorReconciliationManager has started successfully. 2025-08-16T04:01:34,143 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational Received follower initial sync status for member-3-shard-default-operational status sync done true 2025-08-16T04:01:34,152 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.shiro/0.21.0 is waiting for dependencies [Initial app config ShiroConfiguration, (objectClass=org.opendaylight.aaa.cert.api.ICertificateManager)] 2025-08-16T04:01:34,167 | INFO | Blueprint Extender: 2 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.shiro/0.21.0 is waiting for dependencies [(objectClass=org.opendaylight.aaa.cert.api.ICertificateManager)] 2025-08-16T04:01:34,170 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.forwardingrules-manager/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker))] 2025-08-16T04:01:34,195 | INFO | Blueprint Extender: 2 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.lldp-speaker/0.20.0 is waiting for dependencies [(&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker))] 2025-08-16T04:01:34,201 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.forwardingrules-manager/0.20.0 is waiting for dependencies 
[(objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService)] 2025-08-16T04:01:34,207 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | DeviceOwnershipServiceImpl | 298 - org.opendaylight.openflowplugin.applications.device-ownership-service - 0.20.0 | DeviceOwnershipService started 2025-08-16T04:01:34,237 | INFO | Blueprint Extender: 3 | AaaCertMdsalProvider | 163 - org.opendaylight.aaa.cert - 0.21.0 | AaaCertMdsalProvider Initialized 2025-08-16T04:01:34,255 | INFO | Blueprint Extender: 2 | LLDPSpeaker | 300 - org.opendaylight.openflowplugin.applications.lldp-speaker - 0.20.0 | LLDPSpeaker started, it will send LLDP frames each 5 seconds 2025-08-16T04:01:34,312 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | DefaultConfigPusher | 301 - org.opendaylight.openflowplugin.applications.of-switch-config-pusher - 0.20.0 | DefaultConfigPusher has started. 2025-08-16T04:01:34,312 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.topology-lldp-discovery/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker))] 2025-08-16T04:01:34,315 | INFO | Blueprint Extender: 3 | LazyBindingList | 325 - org.opendaylight.yangtools.binding-data-codec-dynamic - 14.0.14 | Using lazy population for lists larger than 16 element(s) 2025-08-16T04:01:34,320 | INFO | Blueprint Extender: 2 | NodeConnectorInventoryEventTranslator | 300 - org.opendaylight.openflowplugin.applications.lldp-speaker - 0.20.0 | NodeConnectorInventoryEventTranslator has started. 2025-08-16T04:01:34,326 | INFO | Blueprint Extender: 2 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.lldp-speaker/0.20.0 has been started 2025-08-16T04:01:34,330 | INFO | Blueprint Event Dispatcher: 1 | BlueprintBundleTracker | 180 - org.opendaylight.controller.blueprint - 11.0.0 | Blueprint container for bundle org.opendaylight.openflowplugin.applications.lldp-speaker_0.20.0 [300] was successfully created 2025-08-16T04:01:34,337 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.topology-lldp-discovery/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService)] 2025-08-16T04:01:34,349 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | FlowCapableTopologyProvider | 304 - org.opendaylight.openflowplugin.applications.topology-manager - 0.20.0 | Topology Manager service started. 
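The Follower lines above show the standard Raft term rule in action: each member-3 shard replica sees term 3 in a RequestVote from member-1 and immediately raises its own term from 2 before deciding whether to grant the vote. A minimal, self-contained Java sketch of that rule follows; the names SimpleFollower and RequestVote are hypothetical and this is not the sal-akka-raft implementation.

// Illustrative sketch only: a follower adopting a higher term from a RequestVote,
// mirroring the "Term 3 ... greater than follower's term 2 - updating term" entries above.
// Class and record names are hypothetical, not OpenDaylight's sal-akka-raft API.
public class SimpleFollower {
    /** Minimal stand-in for the RequestVote fields printed in the log. */
    record RequestVote(long term, String candidateId, long lastLogIndex, long lastLogTerm) {}

    private long currentTerm;
    private String votedFor;

    SimpleFollower(long currentTerm) {
        this.currentTerm = currentTerm;
    }

    /** Returns true if the vote is granted; always adopts a higher term first. */
    boolean onRequestVote(RequestVote rv, long ownLastLogIndex, long ownLastLogTerm) {
        if (rv.term() > currentTerm) {
            // Same behaviour as the logged "updating term": move to the newer term
            // and forget any vote cast in the older one.
            currentTerm = rv.term();
            votedFor = null;
        }
        if (rv.term() < currentTerm) {
            return false; // stale candidate
        }
        // Grant the vote only if no other candidate got it this term and the
        // candidate's log is at least as up to date as ours.
        boolean logOk = rv.lastLogTerm() > ownLastLogTerm
            || (rv.lastLogTerm() == ownLastLogTerm && rv.lastLogIndex() >= ownLastLogIndex);
        if (logOk && (votedFor == null || votedFor.equals(rv.candidateId()))) {
            votedFor = rv.candidateId();
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        SimpleFollower follower = new SimpleFollower(2);
        RequestVote rv = new RequestVote(3, "member-1-shard-toaster-operational", -1, -1);
        System.out.println("vote granted: " + follower.onRequestVote(rv, -1, -1));
        System.out.println("term now: " + follower.currentTerm);
    }
}

Run against the values from the toaster-operational entry above, the sketch grants the vote and ends on term 3, which is exactly the transition the follower logs before the subsequent LeaderStateChanged messages.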
2025-08-16T04:01:34,383 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config Received follower initial sync status for member-3-shard-topology-config status sync done true 2025-08-16T04:01:34,404 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config Received follower initial sync status for member-3-shard-toaster-config status sync done true 2025-08-16T04:01:34,412 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Loading properties from '(urn:opendaylight:params:xml:ns:yang:openflow:provider:config?revision=2016-05-10)openflow-provider-config' YANG file 2025-08-16T04:01:34,425 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational Received follower initial sync status for member-3-shard-topology-operational status sync done true 2025-08-16T04:01:34,427 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | rpc-requests-quota configuration property was changed to '20000' 2025-08-16T04:01:34,445 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | global-notification-quota configuration property was changed to '64000' 2025-08-16T04:01:34,445 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | switch-features-mandatory configuration property was changed to 'false' 2025-08-16T04:01:34,446 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | enable-flow-removed-notification configuration property was changed to 'true' 2025-08-16T04:01:34,446 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | is-statistics-rpc-enabled configuration property was changed to 'false' 2025-08-16T04:01:34,446 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | barrier-count-limit configuration property was changed to '25600' 2025-08-16T04:01:34,446 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | barrier-interval-timeout-limit configuration property was changed to '500' 2025-08-16T04:01:34,446 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | echo-reply-timeout configuration property was changed to '2000' 2025-08-16T04:01:34,446 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | is-statistics-polling-on configuration property was changed to 'true' 2025-08-16T04:01:34,446 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | is-table-statistics-polling-on configuration property was changed to 'true' 2025-08-16T04:01:34,446 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | is-flow-statistics-polling-on configuration property was changed to 'true' 2025-08-16T04:01:34,446 | 
INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | is-group-statistics-polling-on configuration property was changed to 'true' 2025-08-16T04:01:34,446 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | is-meter-statistics-polling-on configuration property was changed to 'true' 2025-08-16T04:01:34,447 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | is-port-statistics-polling-on configuration property was changed to 'true' 2025-08-16T04:01:34,447 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | is-queue-statistics-polling-on configuration property was changed to 'true' 2025-08-16T04:01:34,447 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | skip-table-features configuration property was changed to 'true' 2025-08-16T04:01:34,447 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | basic-timer-delay configuration property was changed to '3000' 2025-08-16T04:01:34,447 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | maximum-timer-delay configuration property was changed to '900000' 2025-08-16T04:01:34,447 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | use-single-layer-serialization configuration property was changed to 'true' 2025-08-16T04:01:34,447 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | thread-pool-min-threads configuration property was changed to '1' 2025-08-16T04:01:34,447 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | thread-pool-max-threads configuration property was changed to '32000' 2025-08-16T04:01:34,447 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | thread-pool-timeout configuration property was changed to '60' 2025-08-16T04:01:34,447 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | device-connection-rate-limit-per-min configuration property was changed to '0' 2025-08-16T04:01:34,447 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | device-connection-hold-time-in-seconds configuration property was changed to '0' 2025-08-16T04:01:34,448 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | device-datastore-removal-delay configuration property was changed to '500' 2025-08-16T04:01:34,448 | INFO | Blueprint Extender: 2 | OSGiConfigurationServiceFactory | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Loading configuration from 'org.opendaylight.openflowplugin' configuration file 2025-08-16T04:01:34,458 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | OSGiSwitchConnectionProviders | 316 - org.opendaylight.openflowplugin.openflowjava.blueprint-config - 0.20.0 | MD-SAL configuration-based SwitchConnectionProviders started 2025-08-16T04:01:34,459 | INFO | Blueprint 
Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | felix.fileinstall.filename configuration property was changed to 'file:/tmp/karaf-0.23.0/etc/org.opendaylight.openflowplugin.cfg' 2025-08-16T04:01:34,460 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | service.pid configuration property was changed to 'org.opendaylight.openflowplugin' 2025-08-16T04:01:34,461 | INFO | opendaylight-cluster-data-notification-dispatcher-47 | AAAEncryptionServiceImpl | 165 - org.opendaylight.aaa.encrypt-service-impl - 0.21.0 | AAAEncryptionService activated 2025-08-16T04:01:34,466 | INFO | Blueprint Extender: 3 | CertificateManagerService | 163 - org.opendaylight.aaa.cert - 0.21.0 | Certificate Manager service has been initialized 2025-08-16T04:01:34,469 | INFO | opendaylight-cluster-data-notification-dispatcher-47 | OSGiEncryptionServiceConfigurator | 165 - org.opendaylight.aaa.encrypt-service-impl - 0.21.0 | Encryption Service enabled 2025-08-16T04:01:34,472 | INFO | Blueprint Extender: 3 | CertificateManagerService | 163 - org.opendaylight.aaa.cert - 0.21.0 | AaaCert Rpc Service has been initialized 2025-08-16T04:01:34,473 | INFO | Blueprint Extender: 2 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.impl/0.20.0 has been started 2025-08-16T04:01:34,474 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.cert/0.21.0 has been started 2025-08-16T04:01:34,476 | INFO | opendaylight-cluster-data-notification-dispatcher-48 | OSGiSwitchConnectionProviders | 316 - org.opendaylight.openflowplugin.openflowjava.blueprint-config - 0.20.0 | Starting instance of type 'openflow-switch-connection-provider-default-impl' 2025-08-16T04:01:34,478 | INFO | Blueprint Event Dispatcher: 1 | BlueprintBundleTracker | 180 - org.opendaylight.controller.blueprint - 11.0.0 | Blueprint container for bundle org.opendaylight.openflowplugin.impl_0.20.0 [309] was successfully created 2025-08-16T04:01:34,481 | INFO | Blueprint Event Dispatcher: 1 | BlueprintBundleTracker | 180 - org.opendaylight.controller.blueprint - 11.0.0 | Blueprint container for bundle org.opendaylight.aaa.cert_0.21.0 [163] was successfully created 2025-08-16T04:01:34,515 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.opendaylight.openflowplugin.srm-shell/0.20.0 2025-08-16T04:01:34,545 | INFO | Blueprint Extender: 3 | StoreBuilder | 162 - org.opendaylight.aaa.authn-api - 0.21.0 | Checking if default entries must be created in IDM store 2025-08-16T04:01:34,555 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | org.opendaylight.openflowplugin.applications.topology.lldp.LLDPLinkAger@389d97ac was registered as configuration listener to OpenFlowPlugin configuration service 2025-08-16T04:01:34,560 | INFO | Blueprint Extender: 1 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | org.opendaylight.openflowplugin.applications.frm.impl.ForwardingRulesManagerImpl@2aa228aa was registered as configuration listener to OpenFlowPlugin configuration service 2025-08-16T04:01:34,586 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | 
OSGiFactorySwitchConnectionConfiguration | 316 - org.opendaylight.openflowplugin.openflowjava.blueprint-config - 0.20.0 | Checking presence of configuration for (urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:config?revision=2016-05-06)switch-connection-config[{(urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:config?revision=2016-05-06)instance-name=openflow-switch-connection-provider-default-impl}] 2025-08-16T04:01:34,589 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | OSGiFactorySwitchConnectionConfiguration | 316 - org.opendaylight.openflowplugin.openflowjava.blueprint-config - 0.20.0 | Checking presence of configuration for (urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:config?revision=2016-05-06)switch-connection-config[{(urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:config?revision=2016-05-06)instance-name=openflow-switch-connection-provider-legacy-impl}] 2025-08-16T04:01:34,590 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | OSGiDistributedDataStore | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Distributed Datastore type CONFIGURATION started 2025-08-16T04:01:34,596 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-30 | OSGiFactorySwitchConnectionConfiguration | 316 - org.opendaylight.openflowplugin.openflowjava.blueprint-config - 0.20.0 | Configuration for (urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:config?revision=2016-05-06)switch-connection-config[{(urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:config?revision=2016-05-06)instance-name=openflow-switch-connection-provider-default-impl}] already present 2025-08-16T04:01:34,597 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-30 | OSGiFactorySwitchConnectionConfiguration | 316 - org.opendaylight.openflowplugin.openflowjava.blueprint-config - 0.20.0 | Configuration for (urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:config?revision=2016-05-06)switch-connection-config[{(urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:config?revision=2016-05-06)instance-name=openflow-switch-connection-provider-legacy-impl}] already present 2025-08-16T04:01:34,648 | INFO | Blueprint Extender: 2 | LLDPActivator | 303 - org.opendaylight.openflowplugin.applications.topology-lldp-discovery - 0.20.0 | Starting LLDPActivator with lldpSecureKey: aa9251f8-c7c0-4322-b8d6-c3a84593bda3 2025-08-16T04:01:34,650 | INFO | Blueprint Extender: 2 | LLDPActivator | 303 - org.opendaylight.openflowplugin.applications.topology-lldp-discovery - 0.20.0 | LLDPDiscoveryListener started. 2025-08-16T04:01:34,651 | INFO | Blueprint Extender: 2 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.topology-lldp-discovery/0.20.0 has been started 2025-08-16T04:01:34,657 | INFO | Blueprint Event Dispatcher: 1 | BlueprintBundleTracker | 180 - org.opendaylight.controller.blueprint - 11.0.0 | Blueprint container for bundle org.opendaylight.openflowplugin.applications.topology-lldp-discovery_0.20.0 [303] was successfully created 2025-08-16T04:01:34,737 | INFO | Blueprint Extender: 1 | ForwardingRulesManagerImpl | 299 - org.opendaylight.openflowplugin.applications.forwardingrules-manager - 0.20.0 | ForwardingRulesManager has started successfully. 
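The ConfigurationServiceFactoryImpl lines above show the two-stage configuration load: defaults come from the openflow-provider-config YANG model, then etc/org.opendaylight.openflowplugin.cfg is applied on top, with each effective change echoed as "<key> configuration property was changed to '<value>'". Below is a small Java sketch of that defaults-plus-overrides pattern, assuming only the file path and property names already visible in the log; it is illustrative, not the OpenFlowPlugin code.

// Illustrative sketch: how a defaults-plus-overrides load can produce the
// "<key> configuration property was changed to '<value>'" messages seen above.
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Properties;

public class ConfigOverrideDemo {
    public static void main(String[] args) throws IOException {
        // Defaults as logged when the YANG-modelled openflow-provider-config is loaded (subset).
        Map<String, String> effective = new LinkedHashMap<>();
        effective.put("rpc-requests-quota", "20000");
        effective.put("switch-features-mandatory", "false");
        effective.put("is-statistics-polling-on", "true");

        // Overrides from the cfg file named in the log, applied on top of the defaults.
        Path cfg = Path.of("/tmp/karaf-0.23.0/etc/org.opendaylight.openflowplugin.cfg");
        Properties overrides = new Properties();
        if (Files.exists(cfg)) {
            try (InputStream in = Files.newInputStream(cfg)) {
                overrides.load(in);
            }
        }

        for (String key : overrides.stringPropertyNames()) {
            String value = overrides.getProperty(key);
            if (!value.equals(effective.get(key))) {
                effective.put(key, value);
                // Mirrors the logged "<key> configuration property was changed to '<value>'" lines.
                System.out.println(key + " configuration property was changed to '" + value + "'");
            }
        }
    }
}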
2025-08-16T04:01:34,740 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.forwardingrules-manager/0.20.0 has been started 2025-08-16T04:01:34,741 | INFO | Blueprint Event Dispatcher: 1 | BlueprintBundleTracker | 180 - org.opendaylight.controller.blueprint - 11.0.0 | Blueprint container for bundle org.opendaylight.openflowplugin.applications.forwardingrules-manager_0.20.0 [299] was successfully created 2025-08-16T04:01:34,875 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-operational: resolved shard 1 to ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-operational/member-1-shard-inventory-operational#470147873], sessionId=1, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent} 2025-08-16T04:01:34,877 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-operational: resolving connection ConnectingClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-operational, generation=1}, cookie=1} to ConnectedClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-operational, generation=1}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-operational/member-1-shard-inventory-operational#470147873], sessionId=1, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} 2025-08-16T04:01:34,877 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-operational: replaced connection ConnectingClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-operational, generation=1}, cookie=1} with ConnectedClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-operational, generation=1}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-operational/member-1-shard-inventory-operational#470147873], sessionId=1, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} in 726.9 μs 2025-08-16T04:01:34,893 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational Received follower initial sync status for member-3-shard-toaster-operational status sync done true 2025-08-16T04:01:34,893 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational Received follower initial sync status for member-3-shard-inventory-operational status sync done true 2025-08-16T04:01:34,945 | INFO | opendaylight-cluster-data-notification-dispatcher-48 | OpenFlowPluginProviderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | OpenFlowPluginProvider started, waiting for onSystemBootReady() 2025-08-16T04:01:34,946 | INFO | opendaylight-cluster-data-notification-dispatcher-48 | OpenFlowPluginProviderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Added connection 
provider org.opendaylight.openflowjava.protocol.impl.core.SwitchConnectionProviderImpl@4c406900 2025-08-16T04:01:34,951 | INFO | opendaylight-cluster-data-notification-dispatcher-48 | OnfExtensionProvider | 308 - org.opendaylight.openflowplugin.extension-onf - 0.20.0 | ONF Extension Provider started. 2025-08-16T04:01:34,951 | INFO | opendaylight-cluster-data-notification-dispatcher-48 | OSGiSwitchConnectionProviders | 316 - org.opendaylight.openflowplugin.openflowjava.blueprint-config - 0.20.0 | Starting instance of type 'openflow-switch-connection-provider-legacy-impl' 2025-08-16T04:01:34,954 | INFO | opendaylight-cluster-data-notification-dispatcher-48 | OpenFlowPluginProviderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Added connection provider org.opendaylight.openflowjava.protocol.impl.core.SwitchConnectionProviderImpl@4f0eb253 2025-08-16T04:01:34,955 | INFO | Blueprint Extender: 3 | StoreBuilder | 162 - org.opendaylight.aaa.authn-api - 0.21.0 | Found default domain in IDM store, skipping insertion of default data 2025-08-16T04:01:34,956 | INFO | Blueprint Extender: 3 | AAAShiroProvider | 172 - org.opendaylight.aaa.shiro - 0.21.0 | AAAShiroProvider Session Initiated 2025-08-16T04:01:35,000 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-30 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Entity ownership change received for node : REMOTE_OWNERSHIP_CHANGED [wasOwner=false, isOwner=false, hasOwner=true] 2025-08-16T04:01:35,001 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-27 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Entity ownership change received for node : REMOTE_OWNERSHIP_CHANGED [wasOwner=false, isOwner=false, hasOwner=true] 2025-08-16T04:01:35,055 | INFO | Blueprint Extender: 3 | IniSecurityManagerFactory | 171 - org.opendaylight.aaa.repackaged-shiro - 0.21.0 | Realms have been explicitly set on the SecurityManager instance - auto-setting of realms will not occur. 
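The ContextChainHolderImpl entries report entity ownership changes as REMOTE_OWNERSHIP_CHANGED with wasOwner=false, isOwner=false, hasOwner=true, meaning another cluster member already owns the device entity. A hypothetical Java sketch of deriving such a label from those three flags follows; the enum values and method are assumptions for illustration, not the openflowplugin API.

// Hypothetical sketch: classifying an ownership change from the wasOwner/isOwner/hasOwner
// flags printed by ContextChainHolderImpl. Enum names are assumptions, not ODL types.
public class OwnershipChangeDemo {
    enum ChangeKind { BECAME_OWNER, LOST_OWNERSHIP, REMOTE_OWNERSHIP_CHANGED, NO_OWNER }

    static ChangeKind classify(boolean wasOwner, boolean isOwner, boolean hasOwner) {
        if (isOwner && !wasOwner) {
            return ChangeKind.BECAME_OWNER;             // this node just won ownership
        }
        if (wasOwner && !isOwner) {
            return ChangeKind.LOST_OWNERSHIP;           // ownership moved away from this node
        }
        if (!isOwner && hasOwner) {
            return ChangeKind.REMOTE_OWNERSHIP_CHANGED; // another member owns the entity
        }
        return ChangeKind.NO_OWNER;                     // nobody currently owns the entity
    }

    public static void main(String[] args) {
        // The combination logged above: wasOwner=false, isOwner=false, hasOwner=true.
        System.out.println(classify(false, false, true));
    }
}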
2025-08-16T04:01:35,083 | INFO | paxweb-config-3-thread-1 | ServerModel | 397 - org.ops4j.pax.web.pax-web-spi - 8.0.30 | Created new ServletContextModel{id=ServletContextModel-11,contextPath='/auth'} 2025-08-16T04:01:35,083 | INFO | paxweb-config-3-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of OsgiContextModel{WB,id=OCM-9,name='RealmManagement',path='/auth',bundle=org.opendaylight.aaa.shiro,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=299, osgi.http.whiteboard.context.name=RealmManagement, service.bundleid=172, service.scope=singleton, osgi.http.whiteboard.context.path=/auth}}", size=2} 2025-08-16T04:01:35,083 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Creating new Jetty context for ServletContextModel{id=ServletContextModel-11,contextPath='/auth'} 2025-08-16T04:01:35,084 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Adding OsgiContextModel{WB,id=OCM-9,name='RealmManagement',path='/auth',bundle=org.opendaylight.aaa.shiro,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=299, osgi.http.whiteboard.context.name=RealmManagement, service.bundleid=172, service.scope=singleton, osgi.http.whiteboard.context.path=/auth}} to o.o.p.w.s.j.i.PaxWebServletContextHandler@29515e5d{/auth,null,STOPPED} 2025-08-16T04:01:35,085 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing default OSGi context model for o.o.p.w.s.j.i.PaxWebServletContextHandler@29515e5d{/auth,null,STOPPED} 2025-08-16T04:01:35,093 | INFO | paxweb-config-3-thread-1 | HttpServiceEnabled | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering FilterModel{id=FilterModel-12,name='org.opendaylight.aaa.filterchain.filters.CustomFilterAdapter',urlPatterns=[/*],contexts=[{WB,OCM-9,RealmManagement,/auth}]} 2025-08-16T04:01:35,094 | INFO | paxweb-config-3-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of FilterModel{id=FilterModel-12,name='org.opendaylight.aaa.filterchain.filters.CustomFilterAdapter',urlPatterns=[/*],contexts=[{WB,OCM-9,RealmManagement,/auth}]}", size=2} 2025-08-16T04:01:35,094 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context /auth 2025-08-16T04:01:35,095 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Starting Jetty context "/auth" with default Osgi Context OsgiContextModel{WB,id=OCM-9,name='RealmManagement',path='/auth',bundle=org.opendaylight.aaa.shiro,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=299, osgi.http.whiteboard.context.name=RealmManagement, service.bundleid=172, service.scope=singleton, osgi.http.whiteboard.context.path=/auth}} 2025-08-16T04:01:35,098 | INFO | paxweb-config-3-thread-1 | CustomFilterAdapter | 166 - org.opendaylight.aaa.filterchain - 0.21.0 | Initializing CustomFilterAdapter 2025-08-16T04:01:35,099 | INFO | paxweb-config-3-thread-1 | CustomFilterAdapter | 166 - org.opendaylight.aaa.filterchain - 0.21.0 | Injecting a new filter chain with 0 Filters: 2025-08-16T04:01:35,099 | INFO | paxweb-config-3-thread-1 | ContextHandler | 139 - org.eclipse.jetty.util - 9.4.57.v20241219 | Started o.o.p.w.s.j.i.PaxWebServletContextHandler@29515e5d{/auth,null,AVAILABLE} 
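The Pax Web entries for /auth show the usual HTTP Whiteboard sequence: a ServletContextHelper service establishes the context path, then filters (CustomFilterAdapter, AAAShiroFilter) and the Jersey ServletContainer are registered against it. The sketch below is a minimal whiteboard registration that would yield similar context and servlet models; the property keys are the standard ones printed in the log, while the activator class and the placeholder servlet are assumptions, not the actual aaa.shiro wiring.

// Minimal sketch of OSGi HTTP Whiteboard registrations corresponding to the /auth
// OsgiContextModel and ServletModel entries above. The servlet is a placeholder,
// not the real Jersey ServletContainer.
import java.io.IOException;
import java.util.Hashtable;
import javax.servlet.Servlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.service.http.context.ServletContextHelper;

public class AuthWhiteboardActivator implements BundleActivator {
    @Override
    public void start(BundleContext ctx) {
        // 1. Publish a ServletContextHelper for the "/auth" context path (cf. OsgiContextModel OCM-9).
        Hashtable<String, Object> ctxProps = new Hashtable<>();
        ctxProps.put("osgi.http.whiteboard.context.name", "RealmManagement");
        ctxProps.put("osgi.http.whiteboard.context.path", "/auth");
        ctx.registerService(ServletContextHelper.class, new ServletContextHelper() { }, ctxProps);

        // 2. Bind a servlet to that context on /* (cf. ServletModel-15 for the Jersey container).
        Hashtable<String, Object> servletProps = new Hashtable<>();
        servletProps.put("osgi.http.whiteboard.servlet.pattern", "/*");
        servletProps.put("osgi.http.whiteboard.context.select",
            "(osgi.http.whiteboard.context.name=RealmManagement)");
        ctx.registerService(Servlet.class, new HttpServlet() {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
                resp.getWriter().write("placeholder for the JAX-RS ServletContainer");
            }
        }, servletProps);
    }

    @Override
    public void stop(BundleContext ctx) {
        // Services registered via registerService are unregistered automatically on bundle stop.
    }
}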
2025-08-16T04:01:35,099 | INFO | paxweb-config-3-thread-1 | OsgiServletContext | 397 - org.ops4j.pax.web.pax-web-spi - 8.0.30 | Registering OsgiServletContext{model=OsgiContextModel{WB,id=OCM-9,name='RealmManagement',path='/auth',bundle=org.opendaylight.aaa.shiro,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=299, osgi.http.whiteboard.context.name=RealmManagement, service.bundleid=172, service.scope=singleton, osgi.http.whiteboard.context.path=/auth}}} as OSGi service for "/auth" context path 2025-08-16T04:01:35,100 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context / 2025-08-16T04:01:35,103 | INFO | Blueprint Extender: 3 | WhiteboardWebServer | 176 - org.opendaylight.aaa.web.osgi-impl - 0.21.0 | Bundle org.opendaylight.aaa.shiro_0.21.0 [172] registered context path /auth with 4 service(s) 2025-08-16T04:01:35,106 | INFO | paxweb-config-3-thread-1 | HttpServiceEnabled | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering FilterModel{id=FilterModel-14,name='org.opendaylight.aaa.shiro.filters.AAAShiroFilter',urlPatterns=[/*, /moon/*],contexts=[{WB,OCM-9,RealmManagement,/auth}]} 2025-08-16T04:01:35,106 | INFO | paxweb-config-3-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of FilterModel{id=FilterModel-14,name='org.opendaylight.aaa.shiro.filters.AAAShiroFilter',urlPatterns=[/*, /moon/*],contexts=[{WB,OCM-9,RealmManagement,/auth}]}", size=2} 2025-08-16T04:01:35,106 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context /auth 2025-08-16T04:01:35,106 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context / 2025-08-16T04:01:35,107 | INFO | paxweb-config-3-thread-1 | HttpServiceEnabled | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering ServletModel{id=ServletModel-15,name='org.glassfish.jersey.servlet.ServletContainer',urlPatterns=[/*],contexts=[{WB,OCM-9,RealmManagement,/auth}]} 2025-08-16T04:01:35,108 | INFO | paxweb-config-3-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of ServletModel{id=ServletModel-15,name='org.glassfish.jersey.servlet.ServletContainer',urlPatterns=[/*],contexts=[{WB,OCM-9,RealmManagement,/auth}]}", size=1} 2025-08-16T04:01:35,108 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Adding servlet ServletModel{id=ServletModel-15,name='org.glassfish.jersey.servlet.ServletContainer',urlPatterns=[/*],contexts=[{WB,OCM-9,RealmManagement,/auth}]} 2025-08-16T04:01:35,142 | ERROR | Blueprint Extender: 3 | MdsalRestconfServer | 279 - org.opendaylight.netconf.restconf-server-mdsal - 9.0.0 | bundle org.opendaylight.netconf.restconf-server-mdsal:9.0.0 (279)[org.opendaylight.restconf.server.mdsal.MdsalRestconfServer(69)] : Constructor argument 5 in class class org.opendaylight.restconf.server.mdsal.MdsalRestconfServer has unsupported type [Lorg.opendaylight.restconf.server.spi.RpcImplementation; 2025-08-16T04:01:35,211 | INFO | Blueprint Extender: 3 | StoppableHttpServiceFactory | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Binding HTTP Service for bundle: [org.opendaylight.netconf.restconf-server-jaxrs_9.0.0 [278]] 2025-08-16T04:01:35,212 | INFO | paxweb-config-3-thread-1 
| ServerModel | 397 - org.ops4j.pax.web.pax-web-spi - 8.0.30 | Created new ServletContextModel{id=ServletContextModel-18,contextPath='/rests'} 2025-08-16T04:01:35,212 | INFO | paxweb-config-3-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of OsgiContextModel{WB,id=OCM-16,name='RESTCONF',path='/rests',bundle=org.opendaylight.netconf.restconf-server-jaxrs,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=311, osgi.http.whiteboard.context.name=RESTCONF, service.bundleid=278, service.scope=singleton, osgi.http.whiteboard.context.path=/rests}}", size=2} 2025-08-16T04:01:35,212 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Creating new Jetty context for ServletContextModel{id=ServletContextModel-18,contextPath='/rests'} 2025-08-16T04:01:35,213 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Adding OsgiContextModel{WB,id=OCM-16,name='RESTCONF',path='/rests',bundle=org.opendaylight.netconf.restconf-server-jaxrs,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=311, osgi.http.whiteboard.context.name=RESTCONF, service.bundleid=278, service.scope=singleton, osgi.http.whiteboard.context.path=/rests}} to o.o.p.w.s.j.i.PaxWebServletContextHandler@2c37b24c{/rests,null,STOPPED} 2025-08-16T04:01:35,214 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing default OSGi context model for o.o.p.w.s.j.i.PaxWebServletContextHandler@2c37b24c{/rests,null,STOPPED} 2025-08-16T04:01:35,218 | INFO | paxweb-config-3-thread-1 | HttpServiceEnabled | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering FilterModel{id=FilterModel-19,name='org.opendaylight.aaa.filterchain.filters.CustomFilterAdapter',urlPatterns=[/*],contexts=[{WB,OCM-16,RESTCONF,/rests}]} 2025-08-16T04:01:35,218 | INFO | paxweb-config-3-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of FilterModel{id=FilterModel-19,name='org.opendaylight.aaa.filterchain.filters.CustomFilterAdapter',urlPatterns=[/*],contexts=[{WB,OCM-16,RESTCONF,/rests}]}", size=2} 2025-08-16T04:01:35,218 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context /auth 2025-08-16T04:01:35,219 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context /rests 2025-08-16T04:01:35,219 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Starting Jetty context "/rests" with default Osgi Context OsgiContextModel{WB,id=OCM-16,name='RESTCONF',path='/rests',bundle=org.opendaylight.netconf.restconf-server-jaxrs,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=311, osgi.http.whiteboard.context.name=RESTCONF, service.bundleid=278, service.scope=singleton, osgi.http.whiteboard.context.path=/rests}} 2025-08-16T04:01:35,220 | INFO | paxweb-config-3-thread-1 | CustomFilterAdapter | 166 - org.opendaylight.aaa.filterchain - 0.21.0 | Initializing CustomFilterAdapter 2025-08-16T04:01:35,220 | INFO | paxweb-config-3-thread-1 | CustomFilterAdapter | 166 - org.opendaylight.aaa.filterchain - 0.21.0 | Injecting a new filter chain with 0 Filters: 2025-08-16T04:01:35,220 | INFO | paxweb-config-3-thread-1 | 
ContextHandler | 139 - org.eclipse.jetty.util - 9.4.57.v20241219 | Started o.o.p.w.s.j.i.PaxWebServletContextHandler@2c37b24c{/rests,null,AVAILABLE} 2025-08-16T04:01:35,220 | INFO | paxweb-config-3-thread-1 | OsgiServletContext | 397 - org.ops4j.pax.web.pax-web-spi - 8.0.30 | Registering OsgiServletContext{model=OsgiContextModel{WB,id=OCM-16,name='RESTCONF',path='/rests',bundle=org.opendaylight.netconf.restconf-server-jaxrs,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=311, osgi.http.whiteboard.context.name=RESTCONF, service.bundleid=278, service.scope=singleton, osgi.http.whiteboard.context.path=/rests}}} as OSGi service for "/rests" context path 2025-08-16T04:01:35,221 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context / 2025-08-16T04:01:35,224 | INFO | paxweb-config-3-thread-1 | HttpServiceEnabled | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering FilterModel{id=FilterModel-21,name='org.opendaylight.aaa.shiro.filters.AAAShiroFilter',urlPatterns=[/*],contexts=[{WB,OCM-16,RESTCONF,/rests}]} 2025-08-16T04:01:35,224 | INFO | paxweb-config-3-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of FilterModel{id=FilterModel-21,name='org.opendaylight.aaa.shiro.filters.AAAShiroFilter',urlPatterns=[/*],contexts=[{WB,OCM-16,RESTCONF,/rests}]}", size=2} 2025-08-16T04:01:35,224 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context /auth 2025-08-16T04:01:35,224 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context /rests 2025-08-16T04:01:35,225 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context / 2025-08-16T04:01:35,226 | INFO | paxweb-config-3-thread-1 | HttpServiceEnabled | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering ServletModel{id=ServletModel-22,name='org.glassfish.jersey.servlet.ServletContainer',urlPatterns=[/*],contexts=[{WB,OCM-16,RESTCONF,/rests}]} 2025-08-16T04:01:35,226 | INFO | paxweb-config-3-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of ServletModel{id=ServletModel-22,name='org.glassfish.jersey.servlet.ServletContainer',urlPatterns=[/*],contexts=[{WB,OCM-16,RESTCONF,/rests}]}", size=1} 2025-08-16T04:01:35,226 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Adding servlet ServletModel{id=ServletModel-22,name='org.glassfish.jersey.servlet.ServletContainer',urlPatterns=[/*],contexts=[{WB,OCM-16,RESTCONF,/rests}]} 2025-08-16T04:01:35,226 | INFO | Blueprint Extender: 3 | WhiteboardWebServer | 176 - org.opendaylight.aaa.web.osgi-impl - 0.21.0 | Bundle org.opendaylight.netconf.restconf-server-jaxrs_9.0.0 [278] registered context path /rests with 4 service(s) 2025-08-16T04:01:35,227 | INFO | paxweb-config-3-thread-1 | ServerModel | 397 - org.ops4j.pax.web.pax-web-spi - 8.0.30 | Created new ServletContextModel{id=ServletContextModel-25,contextPath='/.well-known'} 2025-08-16T04:01:35,227 | INFO | paxweb-config-3-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of 
OsgiContextModel{WB,id=OCM-23,name='WellKnownURIs',path='/.well-known',bundle=org.opendaylight.netconf.restconf-server-jaxrs,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=316, osgi.http.whiteboard.context.name=WellKnownURIs, service.bundleid=278, service.scope=singleton, osgi.http.whiteboard.context.path=/.well-known}}", size=2} 2025-08-16T04:01:35,227 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Creating new Jetty context for ServletContextModel{id=ServletContextModel-25,contextPath='/.well-known'} 2025-08-16T04:01:35,228 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Adding OsgiContextModel{WB,id=OCM-23,name='WellKnownURIs',path='/.well-known',bundle=org.opendaylight.netconf.restconf-server-jaxrs,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=316, osgi.http.whiteboard.context.name=WellKnownURIs, service.bundleid=278, service.scope=singleton, osgi.http.whiteboard.context.path=/.well-known}} to o.o.p.w.s.j.i.PaxWebServletContextHandler@323220ee{/.well-known,null,STOPPED} 2025-08-16T04:01:35,229 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing default OSGi context model for o.o.p.w.s.j.i.PaxWebServletContextHandler@323220ee{/.well-known,null,STOPPED} 2025-08-16T04:01:35,230 | INFO | paxweb-config-3-thread-1 | HttpServiceEnabled | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering FilterModel{id=FilterModel-26,name='org.opendaylight.aaa.shiro.filters.AAAShiroFilter',urlPatterns=[/*],contexts=[{WB,OCM-23,WellKnownURIs,/.well-known}]} 2025-08-16T04:01:35,230 | INFO | paxweb-config-3-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of FilterModel{id=FilterModel-26,name='org.opendaylight.aaa.shiro.filters.AAAShiroFilter',urlPatterns=[/*],contexts=[{WB,OCM-23,WellKnownURIs,/.well-known}]}", size=2} 2025-08-16T04:01:35,230 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context /auth 2025-08-16T04:01:35,230 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context /rests 2025-08-16T04:01:35,230 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context /.well-known 2025-08-16T04:01:35,231 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Starting Jetty context "/.well-known" with default Osgi Context OsgiContextModel{WB,id=OCM-23,name='WellKnownURIs',path='/.well-known',bundle=org.opendaylight.netconf.restconf-server-jaxrs,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=316, osgi.http.whiteboard.context.name=WellKnownURIs, service.bundleid=278, service.scope=singleton, osgi.http.whiteboard.context.path=/.well-known}} 2025-08-16T04:01:35,232 | INFO | paxweb-config-3-thread-1 | ContextHandler | 139 - org.eclipse.jetty.util - 9.4.57.v20241219 | Started o.o.p.w.s.j.i.PaxWebServletContextHandler@323220ee{/.well-known,null,AVAILABLE} 2025-08-16T04:01:35,232 | INFO | paxweb-config-3-thread-1 | OsgiServletContext | 397 - org.ops4j.pax.web.pax-web-spi - 8.0.30 | Registering 
OsgiServletContext{model=OsgiContextModel{WB,id=OCM-23,name='WellKnownURIs',path='/.well-known',bundle=org.opendaylight.netconf.restconf-server-jaxrs,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=316, osgi.http.whiteboard.context.name=WellKnownURIs, service.bundleid=278, service.scope=singleton, osgi.http.whiteboard.context.path=/.well-known}}} as OSGi service for "/.well-known" context path 2025-08-16T04:01:35,233 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context / 2025-08-16T04:01:35,234 | INFO | paxweb-config-3-thread-1 | HttpServiceEnabled | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering ServletModel{id=ServletModel-28,name='Rootfound',urlPatterns=[/*],contexts=[{WB,OCM-23,WellKnownURIs,/.well-known}]} 2025-08-16T04:01:35,234 | INFO | paxweb-config-3-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of ServletModel{id=ServletModel-28,name='Rootfound',urlPatterns=[/*],contexts=[{WB,OCM-23,WellKnownURIs,/.well-known}]}", size=1} 2025-08-16T04:01:35,234 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Adding servlet ServletModel{id=ServletModel-28,name='Rootfound',urlPatterns=[/*],contexts=[{WB,OCM-23,WellKnownURIs,/.well-known}]} 2025-08-16T04:01:35,234 | INFO | Blueprint Extender: 3 | WhiteboardWebServer | 176 - org.opendaylight.aaa.web.osgi-impl - 0.21.0 | Bundle org.opendaylight.netconf.restconf-server-jaxrs_9.0.0 [278] registered context path /.well-known with 3 service(s) 2025-08-16T04:01:35,236 | INFO | Blueprint Extender: 3 | YangLibraryWriterSingleton | 291 - org.opendaylight.netconf.yanglib-mdsal-writer - 9.0.0 | Binding URL provider org.opendaylight.restconf.server.jaxrs.JaxRsYangLibrary@3cdd7411 2025-08-16T04:01:35,259 | INFO | Blueprint Extender: 3 | StringValueObjectFactory | 331 - org.opendaylight.yangtools.binding-reflect - 14.0.14 | Instantiated factory for class org.opendaylight.yang.gen.v1.urn.ietf.params.xml.ns.yang.ietf.inet.types.rev130715.Ipv4AddressNoZone 2025-08-16T04:01:35,260 | INFO | Blueprint Extender: 3 | StringValueObjectFactory | 331 - org.opendaylight.yangtools.binding-reflect - 14.0.14 | Instantiated factory for class org.opendaylight.yang.gen.v1.urn.ietf.params.xml.ns.yang.ietf.inet.types.rev130715.Ipv4Prefix 2025-08-16T04:01:35,261 | INFO | Blueprint Extender: 3 | StringValueObjectFactory | 331 - org.opendaylight.yangtools.binding-reflect - 14.0.14 | Instantiated factory for class org.opendaylight.yang.gen.v1.urn.ietf.params.xml.ns.yang.ietf.inet.types.rev130715.Ipv6AddressNoZone 2025-08-16T04:01:35,261 | INFO | Blueprint Extender: 3 | StringValueObjectFactory | 331 - org.opendaylight.yangtools.binding-reflect - 14.0.14 | Instantiated factory for class org.opendaylight.yang.gen.v1.urn.ietf.params.xml.ns.yang.ietf.inet.types.rev130715.Ipv6Prefix 2025-08-16T04:01:35,286 | INFO | Blueprint Extender: 3 | RestconfTransportChannelListener | 276 - org.opendaylight.netconf.restconf-server - 9.0.0 | Initialized with service class org.opendaylight.restconf.server.mdsal.MdsalRestconfServer 2025-08-16T04:01:35,286 | INFO | Blueprint Extender: 3 | RestconfTransportChannelListener | 276 - org.opendaylight.netconf.restconf-server - 9.0.0 | Initialized with base path: /restconf, default encoding: JSON, default pretty print: false 2025-08-16T04:01:35,341 | INFO | Blueprint Extender: 3 | OSGiNorthbound | 275 - 
org.opendaylight.netconf.restconf-nb - 9.0.0 | Global RESTCONF northbound pools started 2025-08-16T04:01:35,342 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.shiro/0.21.0 has been started 2025-08-16T04:01:35,343 | INFO | Blueprint Event Dispatcher: 1 | BlueprintBundleTracker | 180 - org.opendaylight.controller.blueprint - 11.0.0 | Blueprint container for bundle org.opendaylight.aaa.shiro_0.21.0 [172] was successfully created 2025-08-16T04:01:35,656 | INFO | SystemReadyService-0 | KarafSystemReady | 202 - org.opendaylight.infrautils.ready-impl - 7.1.4 | checkBundleDiagInfos: Elapsed time 15s, remaining time 284s, diag: Active {INSTALLED=0, RESOLVED=10, UNKNOWN=0, GRACE_PERIOD=0, WAITING=0, STARTING=0, ACTIVE=399, STOPPING=0, FAILURE=0} 2025-08-16T04:01:35,657 | INFO | SystemReadyService-0 | SimpleSystemReadyMonitor | 201 - org.opendaylight.infrautils.ready-api - 7.1.4 | System ready; AKA: Aye captain, all warp coils are now operating at peak efficiency! [M.] 2025-08-16T04:01:35,657 | INFO | SystemReadyService-0 | SimpleSystemReadyMonitor | 201 - org.opendaylight.infrautils.ready-api - 7.1.4 | Now notifying all its registered SystemReadyListeners... 2025-08-16T04:01:35,657 | INFO | SystemReadyService-0 | OpenFlowPluginProviderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | onSystemBootReady() received, starting the switch connections 2025-08-16T04:01:35,773 | INFO | epollEventLoopGroup-4-1 | TcpServerFacade | 318 - org.opendaylight.openflowplugin.openflowjava.openflow-protocol-impl - 0.20.0 | Switch listener started and ready to accept incoming TCP/TLS connections on /[0:0:0:0:0:0:0:0]:6633 2025-08-16T04:01:35,774 | INFO | epollEventLoopGroup-4-1 | SwitchConnectionProviderImpl | 318 - org.opendaylight.openflowplugin.openflowjava.openflow-protocol-impl - 0.20.0 | Started TCP connection on /[0:0:0:0:0:0:0:0]:6633 2025-08-16T04:01:35,774 | INFO | epollEventLoopGroup-4-1 | OpenFlowPluginProviderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Connection provider org.opendaylight.openflowjava.protocol.impl.core.SwitchConnectionProviderImpl@4f0eb253 started 2025-08-16T04:01:35,778 | INFO | epollEventLoopGroup-2-1 | TcpServerFacade | 318 - org.opendaylight.openflowplugin.openflowjava.openflow-protocol-impl - 0.20.0 | Switch listener started and ready to accept incoming TCP/TLS connections on /[0:0:0:0:0:0:0:0]:6653 2025-08-16T04:01:35,778 | INFO | epollEventLoopGroup-2-1 | SwitchConnectionProviderImpl | 318 - org.opendaylight.openflowplugin.openflowjava.openflow-protocol-impl - 0.20.0 | Started TCP connection on /[0:0:0:0:0:0:0:0]:6653 2025-08-16T04:01:35,780 | INFO | epollEventLoopGroup-2-1 | OpenFlowPluginProviderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Connection provider org.opendaylight.openflowjava.protocol.impl.core.SwitchConnectionProviderImpl@4c406900 started 2025-08-16T04:01:35,780 | INFO | epollEventLoopGroup-2-1 | OpenFlowPluginProviderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | All switchConnectionProviders are up and running (2). 
2025-08-16T04:01:46,381 | INFO | qtp230590180-360 | AuthenticationManager | 174 - org.opendaylight.aaa.tokenauthrealm - 0.21.0 | Authentication is now enabled 2025-08-16T04:01:46,382 | INFO | qtp230590180-360 | AuthenticationManager | 174 - org.opendaylight.aaa.tokenauthrealm - 0.21.0 | Authentication Manager activated 2025-08-16T04:01:48,099 | INFO | qtp230590180-358 | JaxRsRestconf | 278 - org.opendaylight.netconf.restconf-server-jaxrs - 9.0.0 | RESTCONF data-missing condition is reported as HTTP status 409 (RFC8040) 2025-08-16T04:01:48,102 | INFO | qtp230590180-358 | JaxRsRestconf | 278 - org.opendaylight.netconf.restconf-server-jaxrs - 9.0.0 | RESTCONF data-missing condition is reported as HTTP status 409 (RFC8040) 2025-08-16T04:01:48,336 | INFO | qtp230590180-358 | ApiPathParser | 273 - org.opendaylight.netconf.restconf-api - 9.0.0 | Consecutive slashes in REST URLs will be rejected 2025-08-16T04:01:52,700 | INFO | sshd-SshServer[34b7cc64](port=8101)-nio2-thread-2 | ServerSessionImpl | 125 - org.apache.sshd.osgi - 2.14.0 | Session karaf@/10.30.171.78:36378 authenticated 2025-08-16T04:01:53,290 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Verify Data Recovery After Cluster Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Verify Data Recovery After Cluster Restart 2025-08-16T04:03:05,131 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-27 | PhiAccrualFailureDetector | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | heartbeat interval is growing too large for address pekko://opendaylight-cluster-data@10.30.170.196:2550: 2867 millis 2025-08-16T04:03:55,290 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-32 | PhiAccrualFailureDetector | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | heartbeat interval is growing too large for address pekko://opendaylight-cluster-data@10.30.170.62:2550: 2220 millis 2025-08-16T04:04:04,575 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-28 | PhiAccrualFailureDetector | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | heartbeat interval is growing too large for address pekko://opendaylight-cluster-data@10.30.170.62:2550: 4436 millis 2025-08-16T04:04:31,660 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-9 | PhiAccrualFailureDetector | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | heartbeat interval is growing too large for address pekko://opendaylight-cluster-data@10.30.170.62:2550: 2011 millis 2025-08-16T04:07:45,117 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Start Mininet Again Connect To Follower Node1" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Start Mininet Again Connect To Follower Node1 2025-08-16T04:07:45,690 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Verify Flows In Switch After Cluster Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Verify Flows In Switch 
After Cluster Restart 2025-08-16T04:07:46,139 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Stop Mininet Connected To Follower Node1" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Stop Mininet Connected To Follower Node1 2025-08-16T04:07:46,576 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Delete All Flows From Follower Node1" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Delete All Flows From Follower Node1 2025-08-16T04:07:46,971 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Verify No Flows In Cluster" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Verify No Flows In Cluster 2025-08-16T04:07:47,409 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Get Inventory Leader Before Leader Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Get Inventory Leader Before Leader Restart 2025-08-16T04:07:48,585 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Start Mininet Connect To Leader" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Start Mininet Connect To Leader 2025-08-16T04:07:51,750 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Add Bulk Flow From Leader" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Add Bulk Flow From Leader 2025-08-16T04:07:52,044 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-32 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Entity ownership change received for node : REMOTE_OWNERSHIP_LOST_NO_OWNER [wasOwner=false, isOwner=false, hasOwner=false] 2025-08-16T04:07:52,068 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-30 | DeviceOwnershipServiceImpl | 298 - org.opendaylight.openflowplugin.applications.device-ownership-service - 0.20.0 | Entity ownership change received for node : openflow:1 : REMOTE_OWNERSHIP_CHANGED [wasOwner=false, isOwner=false, hasOwner=true] 2025-08-16T04:07:52,148 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-9 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Entity ownership change received for node : REMOTE_OWNERSHIP_CHANGED [wasOwner=false, isOwner=false, hasOwner=true] 2025-08-16T04:07:52,260 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Get Bulk Flows and 
Verify In Cluster Before Leader Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Get Bulk Flows and Verify In Cluster Before Leader Restart 2025-08-16T04:07:52,347 | INFO | opendaylight-cluster-data-notification-dispatcher-52 | ConnectionManagerImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Clearing the device connection timer for the device 1 2025-08-16T04:07:52,573 | WARN | node-cleaner-0 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Seems like device is still owned by other controller instance. Skip deleting openflow:1 node from operational datastore. 2025-08-16T04:07:52,693 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Verify Flows In Switch Before Leader Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Verify Flows In Switch Before Leader Restart 2025-08-16T04:09:34,960 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Kill Leader From Cluster Node" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Kill Leader From Cluster Node 2025-08-16T04:09:35,145 | INFO | pipe-log:log "ROBOT MESSAGE: Killing ODL1 10.30.170.62" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Killing ODL1 10.30.170.62 2025-08-16T04:09:40,922 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-31 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.30:2550] - Marking node as UNREACHABLE [Member(pekko://opendaylight-cluster-data@10.30.170.62:2550, Up)]. 
2025-08-16T04:09:40,927 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Received UnreachableMember: memberName MemberName{name=member-1}, address: pekko://opendaylight-cluster-data@10.30.170.62:2550 2025-08-16T04:09:40,928 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Received UnreachableMember: memberName MemberName{name=member-1}, address: pekko://opendaylight-cluster-data@10.30.170.62:2550 2025-08-16T04:09:40,928 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-31 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-operational: connection ConnectedClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-operational, generation=1}, cookie=0, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-operational/member-1-shard-default-operational#981983360], sessionId=0, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=absent}} reconnecting as ReconnectingClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-operational, generation=1}, cookie=0, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-operational/member-1-shard-default-operational#981983360], sessionId=0, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=absent}} 2025-08-16T04:09:40,928 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-9 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-config: connection ConnectedClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=1}, cookie=0, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-config/member-1-shard-default-config#1460049693], sessionId=0, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=absent}} reconnecting as ReconnectingClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=1}, cookie=0, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-config/member-1-shard-default-config#1460049693], sessionId=0, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=absent}} 2025-08-16T04:09:40,930 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-31 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-operational: refreshing backend for shard 0 2025-08-16T04:09:40,930 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-9 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-config: refreshing backend for shard 0 2025-08-16T04:09:40,930 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-31 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-operational: connection ConnectedClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-operational, generation=1}, cookie=1, 
backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-operational/member-1-shard-inventory-operational#470147873], sessionId=1, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} reconnecting as ReconnectingClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-operational, generation=1}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-operational/member-1-shard-inventory-operational#470147873], sessionId=1, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} 2025-08-16T04:09:40,930 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-31 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-operational: refreshing backend for shard 1 2025-08-16T04:09:43,120 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-9 | RemoteActorRefProvider$RemoteDeadLetterActorRef | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Message [org.apache.pekko.cluster.ddata.Replicator$Internal$Status] from Actor[pekko://opendaylight-cluster-data/system/ddataReplicator#-1309554401] to Actor[pekko://opendaylight-cluster-data/deadLetters] was not delivered. [6] dead letters encountered. If this is not an expected behavior then Actor[pekko://opendaylight-cluster-data/deadLetters] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. 2025-08-16T04:09:43,121 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-9 | RemoteActorRefProvider$RemoteDeadLetterActorRef | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Message [org.apache.pekko.cluster.GossipStatus] from Actor[pekko://opendaylight-cluster-data/system/cluster/core/daemon#-1605187417] to Actor[pekko://opendaylight-cluster-data/deadLetters] was not delivered. [7] dead letters encountered. If this is not an expected behavior then Actor[pekko://opendaylight-cluster-data/deadLetters] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. 2025-08-16T04:09:43,121 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-9 | RemoteActorRefProvider$RemoteDeadLetterActorRef | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Message [org.opendaylight.controller.remote.rpc.registry.gossip.GossipStatus] from Actor[pekko://opendaylight-cluster-data/user/rpc/registry/gossiper#1131532458] to Actor[pekko://opendaylight-cluster-data/deadLetters] was not delivered. [8] dead letters encountered. If this is not an expected behavior then Actor[pekko://opendaylight-cluster-data/deadLetters] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. 2025-08-16T04:09:43,121 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-9 | RemoteActorRefProvider$RemoteDeadLetterActorRef | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Message [org.apache.pekko.cluster.ddata.Replicator$Internal$Status] from Actor[pekko://opendaylight-cluster-data/system/ddataReplicator#-1309554401] to Actor[pekko://opendaylight-cluster-data/deadLetters] was not delivered. 
[9] dead letters encountered. If this is not an expected behavior then Actor[pekko://opendaylight-cluster-data/deadLetters] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. 2025-08-16T04:09:43,123 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-9 | RemoteActorRefProvider$RemoteDeadLetterActorRef | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Message [org.apache.pekko.cluster.ddata.Replicator$Internal$Status] from Actor[pekko://opendaylight-cluster-data/system/ddataReplicator#-1309554401] to Actor[pekko://opendaylight-cluster-data/deadLetters] was not delivered. [10] dead letters encountered, no more dead letters will be logged in next [5.000 min]. If this is not an expected behavior then Actor[pekko://opendaylight-cluster-data/deadLetters] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. 2025-08-16T04:09:43,388 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-27 | Materializer | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | [outbound connection to [pekko://opendaylight-cluster-data@10.30.170.62:2550], message stream] Upstream failed, cause: StreamTcpException: Tcp command [Connect(10.30.170.62/:2550,None,List(),Some(5000 milliseconds),true)] failed because of java.net.ConnectException: Connection refused 2025-08-16T04:09:45,382 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Follower | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-config (Follower): Term 4 in "RequestVote{term=4, candidateId=member-2-shard-inventory-config, lastLogIndex=9, lastLogTerm=3}" message is greater than follower's term 3 - updating term 2025-08-16T04:09:45,398 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@35cf427 2025-08-16T04:09:45,399 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config Received follower initial sync status for member-3-shard-inventory-config status sync done false 2025-08-16T04:09:45,401 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config Received follower initial sync status for member-3-shard-inventory-config status sync done true 2025-08-16T04:09:45,468 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Follower | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-operational (Follower): Leader pekko://opendaylight-cluster-data@10.30.170.62:2550 is unreachable 2025-08-16T04:09:45,476 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Candidate | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-operational (Candidate): Starting new election term 4 2025-08-16T04:09:45,477 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | RaftActorBehavior | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-operational (Follower) :- Switching from behavior Follower to Candidate, 
election term: 4 2025-08-16T04:09:45,478 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@71c780b4 2025-08-16T04:09:45,478 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-9 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-inventory-operational , received role change from Follower to Candidate 2025-08-16T04:09:45,478 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received role changed for member-3-shard-inventory-operational from Follower to Candidate 2025-08-16T04:09:45,544 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | Follower | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-config (Follower): Term 4 in "RequestVote{term=4, candidateId=member-2-shard-topology-config, lastLogIndex=-1, lastLogTerm=-1}" message is greater than follower's term 3 - updating term 2025-08-16T04:09:45,551 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config Received follower initial sync status for member-3-shard-topology-config status sync done false 2025-08-16T04:09:45,551 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@6dc4e2cc 2025-08-16T04:09:45,556 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Follower | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-toaster-config (Follower): Leader pekko://opendaylight-cluster-data@10.30.170.62:2550 is unreachable 2025-08-16T04:09:45,562 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Candidate | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-toaster-config (Candidate): Starting new election term 4 2025-08-16T04:09:45,562 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | RaftActorBehavior | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-toaster-config (Follower) :- Switching from behavior Follower to Candidate, election term: 4 2025-08-16T04:09:45,563 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@323e4420 2025-08-16T04:09:45,563 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-9 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-toaster-config , received role change from Follower to Candidate 2025-08-16T04:09:45,563 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received role changed for member-3-shard-toaster-config from Follower to Candidate 2025-08-16T04:09:45,576 | INFO | 
opendaylight-cluster-data-shard-dispatcher-38 | RaftActorBehavior | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-toaster-config (Candidate) :- Switching from behavior Candidate to Leader, election term: 4 2025-08-16T04:09:45,577 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@23576ebe 2025-08-16T04:09:45,578 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-9 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-toaster-config , received role change from Candidate to Leader 2025-08-16T04:09:45,578 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received role changed for member-3-shard-toaster-config from Candidate to Leader 2025-08-16T04:09:45,583 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Follower | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-toaster-operational (Follower): Term 4 in "RequestVote{term=4, candidateId=member-2-shard-toaster-operational, lastLogIndex=-1, lastLogTerm=-1}" message is greater than follower's term 3 - updating term 2025-08-16T04:09:45,591 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@164a6865 2025-08-16T04:09:45,591 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational Received follower initial sync status for member-3-shard-toaster-operational status sync done false 2025-08-16T04:09:45,747 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Follower | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-operational (Follower): Leader pekko://opendaylight-cluster-data@10.30.170.62:2550 is unreachable 2025-08-16T04:09:45,750 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Candidate | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-operational (Candidate): Starting new election term 4 2025-08-16T04:09:45,751 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | RaftActorBehavior | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-operational (Follower) :- Switching from behavior Follower to Candidate, election term: 4 2025-08-16T04:09:45,752 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@f0bd38c 2025-08-16T04:09:45,752 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-9 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-topology-operational , received role change from Follower to Candidate 2025-08-16T04:09:45,753 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | 
ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received role changed for member-3-shard-topology-operational from Follower to Candidate 2025-08-16T04:09:45,761 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | RaftActorBehavior | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-operational (Candidate) :- Switching from behavior Candidate to Leader, election term: 4 2025-08-16T04:09:45,762 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@353ddab1 2025-08-16T04:09:45,764 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-9 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-topology-operational , received role change from Candidate to Leader 2025-08-16T04:09:45,765 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received role changed for member-3-shard-topology-operational from Candidate to Leader 2025-08-16T04:09:45,767 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Follower | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-operational (Follower): Leader pekko://opendaylight-cluster-data@10.30.170.62:2550 is unreachable 2025-08-16T04:09:45,769 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Candidate | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-operational (Candidate): Starting new election term 4 2025-08-16T04:09:45,770 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | RaftActorBehavior | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-operational (Follower) :- Switching from behavior Follower to Candidate, election term: 4 2025-08-16T04:09:45,770 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@48772297 2025-08-16T04:09:45,770 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-9 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-default-operational , received role change from Follower to Candidate 2025-08-16T04:09:45,771 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received role changed for member-3-shard-default-operational from Follower to Candidate 2025-08-16T04:09:45,778 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | RaftActorBehavior | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-operational (Candidate) :- Switching from behavior Candidate to Leader, election term: 4 2025-08-16T04:09:45,778 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-29 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-default-operational , 
received role change from Candidate to Leader 2025-08-16T04:09:45,778 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@2bbea155 2025-08-16T04:09:45,779 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received role changed for member-3-shard-default-operational from Candidate to Leader 2025-08-16T04:09:45,781 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-29 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-operational: resolved shard 0 to ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data/user/shardmanager-operational/member-3-shard-default-operational#-1610344384], sessionId=2, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=present} 2025-08-16T04:09:45,782 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-29 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-operational: resolving connection ReconnectingClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-operational, generation=1}, cookie=0, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-operational/member-1-shard-default-operational#981983360], sessionId=0, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=absent}} to ConnectedClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-operational, generation=1}, cookie=0, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data/user/shardmanager-operational/member-3-shard-default-operational#-1610344384], sessionId=2, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=present}} 2025-08-16T04:09:45,784 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-29 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-operational: replaced connection ReconnectingClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-operational, generation=1}, cookie=0, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-operational/member-1-shard-default-operational#981983360], sessionId=0, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=absent}} with ConnectedClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-operational, generation=1}, cookie=0, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data/user/shardmanager-operational/member-3-shard-default-operational#-1610344384], sessionId=2, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=present}} in 2.911 ms 2025-08-16T04:09:45,827 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Follower | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-config (Follower): Leader pekko://opendaylight-cluster-data@10.30.170.62:2550 is unreachable 2025-08-16T04:09:45,833 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Candidate | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | 
member-3-shard-default-config (Candidate): Starting new election term 4 2025-08-16T04:09:45,833 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | RaftActorBehavior | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-config (Follower) :- Switching from behavior Follower to Candidate, election term: 4 2025-08-16T04:09:45,834 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@171666f6 2025-08-16T04:09:45,834 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-27 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-default-config , received role change from Follower to Candidate 2025-08-16T04:09:45,835 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received role changed for member-3-shard-default-config from Follower to Candidate 2025-08-16T04:09:46,068 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config Received follower initial sync status for member-3-shard-topology-config status sync done true 2025-08-16T04:09:46,108 | INFO | opendaylight-cluster-data-shard-dispatcher-41 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational Received follower initial sync status for member-3-shard-toaster-operational status sync done true 2025-08-16T04:09:48,969 | INFO | opendaylight-cluster-data-shard-dispatcher-40 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received MemberRemoved: memberName: MemberName{name=member-1}, address: pekko://opendaylight-cluster-data@10.30.170.62:2550 2025-08-16T04:09:48,969 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received MemberRemoved: memberName: MemberName{name=member-1}, address: pekko://opendaylight-cluster-data@10.30.170.62:2550 2025-08-16T04:09:48,972 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-29 | ClusterSingletonManager | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Younger observed OldestChanged: [Some(pekko://opendaylight-cluster-data@10.30.170.62:2550) -> Some(pekko://opendaylight-cluster-data@10.30.170.196:2550)] 2025-08-16T04:09:48,973 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-29 | Association | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Association to [pekko://opendaylight-cluster-data@10.30.170.62:2550] with UID [-700191683640238917] is irrecoverably failed. UID is now quarantined and all messages to this UID will be delivered to dead letters. Remote ActorSystem must be restarted to recover from this situation. 
Reason: Cluster member removed, previous status [Down] 2025-08-16T04:09:49,117 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Stop Mininet Connected To Leader Node" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Stop Mininet Connected To Leader Node 2025-08-16T04:09:50,968 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-31 | Materializer | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | [outbound connection to [pekko://opendaylight-cluster-data@10.30.170.62:2550], message stream] Upstream failed, cause: StreamTcpException: Tcp command [Connect(10.30.170.62/:2550,None,List(),Some(5000 milliseconds),true)] failed because of java.net.ConnectException: Connection refused 2025-08-16T04:09:51,651 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Restart Leader from Cluster Node" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Restart Leader from Cluster Node 2025-08-16T04:09:51,790 | INFO | pipe-log:log "ROBOT MESSAGE: Starting ODL1 10.30.170.62" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting ODL1 10.30.170.62 2025-08-16T04:09:55,524 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Candidate | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-operational (Candidate): Term 5 in "RequestVote{term=5, candidateId=member-2-shard-inventory-operational, lastLogIndex=356, lastLogTerm=3}" message is greater than Candidate's term 4 - switching to Follower 2025-08-16T04:09:55,530 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | RaftActorBehavior | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-operational (Candidate) :- Switching from behavior Candidate to Follower, election term: 5 2025-08-16T04:09:55,530 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-inventory-operational , received role change from Candidate to Follower 2025-08-16T04:09:55,531 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received role changed for member-3-shard-inventory-operational from Candidate to Follower 2025-08-16T04:09:55,532 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@6db90278 2025-08-16T04:09:55,532 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: All Shards are ready - data store operational is ready 2025-08-16T04:09:55,533 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational Received follower initial sync status for 
member-3-shard-inventory-operational status sync done false 2025-08-16T04:09:55,538 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational Received follower initial sync status for member-3-shard-inventory-operational status sync done true 2025-08-16T04:09:55,540 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-operational: resolved shard 1 to ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-operational/member-2-shard-inventory-operational#1155012946], sessionId=3, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent} 2025-08-16T04:09:55,540 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-operational: resolving connection ReconnectingClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-operational, generation=1}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-operational/member-1-shard-inventory-operational#470147873], sessionId=1, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} to ConnectedClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-operational, generation=1}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-operational/member-2-shard-inventory-operational#1155012946], sessionId=3, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} 2025-08-16T04:09:55,541 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-operational: replaced connection ReconnectingClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-operational, generation=1}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-operational/member-1-shard-inventory-operational#470147873], sessionId=1, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} with ConnectedClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-operational, generation=1}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-operational/member-2-shard-inventory-operational#1155012946], sessionId=3, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} in 493.7 μs 2025-08-16T04:09:55,757 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-27 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Entity ownership change received for node : REMOTE_OWNERSHIP_CHANGED [wasOwner=false, isOwner=false, hasOwner=true] 2025-08-16T04:09:55,757 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-27 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Entity ownership change received for node : REMOTE_OWNERSHIP_CHANGED [wasOwner=false, isOwner=false, hasOwner=true] 2025-08-16T04:09:55,909 | INFO | 
opendaylight-cluster-data-shard-dispatcher-45 | Candidate | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-config (Candidate): Starting new election term 5 2025-08-16T04:09:55,918 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | RaftActorBehavior | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-config (Candidate) :- Switching from behavior Candidate to Leader, election term: 5 2025-08-16T04:09:55,918 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@4a1ff458 2025-08-16T04:09:55,918 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-default-config , received role change from Candidate to Leader 2025-08-16T04:09:55,919 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received role changed for member-3-shard-default-config from Candidate to Leader 2025-08-16T04:09:55,920 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: All Shards are ready - data store config is ready 2025-08-16T04:09:55,921 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-config: resolved shard 0 to ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-default-config#-362519020], sessionId=1, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=present} 2025-08-16T04:09:55,921 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-config: resolving connection ReconnectingClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=1}, cookie=0, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-config/member-1-shard-default-config#1460049693], sessionId=0, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=absent}} to ConnectedClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=1}, cookie=0, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-default-config#-362519020], sessionId=1, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=present}} 2025-08-16T04:09:55,922 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-config: replaced connection ReconnectingClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=1}, cookie=0, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-config/member-1-shard-default-config#1460049693], sessionId=0, version=POTASSIUM, maxMessages=1000, 
cookie=0, shard=default, dataTree=absent}} with ConnectedClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=1}, cookie=0, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-default-config#-362519020], sessionId=1, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=present}} in 573.3 μs 2025-08-16T04:09:55,987 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-27 | ClusterSingletonManager | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Member removed [pekko://opendaylight-cluster-data@10.30.170.62:2550] 2025-08-16T04:09:56,110 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-27 | ClusterSingletonProxy | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Singleton identified at [pekko://opendaylight-cluster-data@10.30.170.196:2550/system/singletonManagerOwnerSupervisor/OwnerSupervisor] 2025-08-16T04:09:58,476 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-28 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.30:2550] - Received InitJoin message from [Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/system/cluster/core/daemon/firstSeedNodeProcess-1#-1959739350]] to [pekko://opendaylight-cluster-data@10.30.170.30:2550] 2025-08-16T04:09:58,477 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-28 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.30:2550] - Sending InitJoinAck message from node [pekko://opendaylight-cluster-data@10.30.170.30:2550] to [Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/system/cluster/core/daemon/firstSeedNodeProcess-1#-1959739350]] (version [1.0.3]) 2025-08-16T04:09:58,511 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-31 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.30:2550] - Received InitJoin message from [Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/system/cluster/core/daemon/firstSeedNodeProcess-1#-1959739350]] to [pekko://opendaylight-cluster-data@10.30.170.30:2550] 2025-08-16T04:09:58,511 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-31 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.30:2550] - Sending InitJoinAck message from node [pekko://opendaylight-cluster-data@10.30.170.30:2550] to [Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/system/cluster/core/daemon/firstSeedNodeProcess-1#-1959739350]] (version [1.0.3]) 2025-08-16T04:09:58,516 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-28 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Entity ownership change received for node : REMOTE_OWNERSHIP_CHANGED [wasOwner=false, isOwner=false, hasOwner=true] 2025-08-16T04:09:58,517 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-28 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Entity ownership change received for node : REMOTE_OWNERSHIP_CHANGED [wasOwner=false, isOwner=false, hasOwner=true] 2025-08-16T04:09:59,162 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore 
- 11.0.0 | shard-manager-operational: Received MemberUp: memberName: MemberName{name=member-1}, address: pekko://opendaylight-cluster-data@10.30.170.62:2550 2025-08-16T04:09:59,163 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-default-operational with address pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-operational/member-1-shard-default-operational 2025-08-16T04:09:59,163 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received MemberUp: memberName: MemberName{name=member-1}, address: pekko://opendaylight-cluster-data@10.30.170.62:2550 2025-08-16T04:09:59,163 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-topology-operational with address pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-operational/member-1-shard-topology-operational 2025-08-16T04:09:59,163 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-default-config with address pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-config/member-1-shard-default-config 2025-08-16T04:09:59,163 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-inventory-operational with address pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-operational/member-1-shard-inventory-operational 2025-08-16T04:09:59,164 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-topology-config with address pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-config/member-1-shard-topology-config 2025-08-16T04:09:59,164 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-operational: Peer address for peer member-1-shard-default-operational set to pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-operational/member-1-shard-default-operational 2025-08-16T04:09:59,164 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-inventory-config with address pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-config/member-1-shard-inventory-config 2025-08-16T04:09:59,164 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-operational: Peer address for peer member-1-shard-topology-operational set to pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-operational/member-1-shard-topology-operational 2025-08-16T04:09:59,164 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-config: Peer address for 
peer member-1-shard-default-config set to pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-config/member-1-shard-default-config 2025-08-16T04:09:59,164 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-toaster-config with address pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-config/member-1-shard-toaster-config 2025-08-16T04:09:59,164 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-operational: Peer address for peer member-1-shard-inventory-operational set to pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-operational/member-1-shard-inventory-operational 2025-08-16T04:09:59,164 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-config: Peer address for peer member-1-shard-topology-config set to pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-config/member-1-shard-topology-config 2025-08-16T04:09:59,164 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-config: Peer address for peer member-1-shard-inventory-config set to pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-config/member-1-shard-inventory-config 2025-08-16T04:09:59,164 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: All Shards are ready - data store config is ready 2025-08-16T04:09:59,164 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-toaster-operational with address pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-operational/member-1-shard-toaster-operational 2025-08-16T04:09:59,164 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-toaster-config: Peer address for peer member-1-shard-toaster-config set to pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-config/member-1-shard-toaster-config 2025-08-16T04:09:59,164 | INFO | opendaylight-cluster-data-shard-dispatcher-39 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: All Shards are ready - data store operational is ready 2025-08-16T04:09:59,164 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-toaster-operational: Peer address for peer member-1-shard-toaster-operational set to pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-operational/member-1-shard-toaster-operational 2025-08-16T04:10:02,405 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | AbstractLeader | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-operational (Leader): handleAppendEntriesReply - received unsuccessful reply: AppendEntriesReply{term=4, success=false, followerId=member-1-shard-topology-operational, logLastIndex=-1, logLastTerm=-1, 
forceInstallSnapshot=false, needsLeaderAddress=false, payloadVersion=13, raftVersion=5, recipientRaftVersion=5}, leader snapshotIndex: 4, snapshotTerm: 3, replicatedToAllIndex: -1 2025-08-16T04:10:02,405 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | AbstractLeader | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-operational (Leader): follower member-1-shard-topology-operational appears to be behind the leader from the last snapshot - updated: matchIndex: -1, nextIndex: 0 2025-08-16T04:10:02,406 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | AbstractLeader | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-operational (Leader): Initiating install snapshot to follower member-1-shard-topology-operational: follower nextIndex: 0, leader snapshotIndex: 4, leader lastIndex: 9, leader log size: 5 2025-08-16T04:10:02,409 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | SnapshotManager | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-operational: Initiating snapshot capture CaptureSnapshot [lastAppliedIndex=9, lastAppliedTerm=4, lastIndex=9, lastTerm=4, installSnapshotInitiated=, replicatedToAllIndex=-1, replicatedToAllTerm=-1, unAppliedEntries size=0, mandatoryTrim=false] to install on member-1-shard-topology-operational 2025-08-16T04:10:02,417 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-31 | DeviceOwnershipServiceImpl | 298 - org.opendaylight.openflowplugin.applications.device-ownership-service - 0.20.0 | Entity ownership change received for node : openflow:1 : REMOTE_OWNERSHIP_LOST_NO_OWNER [wasOwner=false, isOwner=false, hasOwner=false] 2025-08-16T04:10:02,417 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-27 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Entity ownership change received for node : REMOTE_OWNERSHIP_LOST_NO_OWNER [wasOwner=false, isOwner=false, hasOwner=false] 2025-08-16T04:10:02,419 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | SnapshotManager | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-operational: Persising snapshot at EntryInfo[index=9, term=4]/EntryInfo[index=9, term=4] 2025-08-16T04:10:02,420 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | SnapshotManager | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-operational: Removed in-memory snapshotted entries, adjusted snapshotIndex: 4 and term: 3 2025-08-16T04:10:02,423 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | AbstractLeader | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-operational (Leader): handleAppendEntriesReply - received unsuccessful reply: AppendEntriesReply{term=4, success=false, followerId=member-1-shard-default-operational, logLastIndex=-1, logLastTerm=-1, forceInstallSnapshot=false, needsLeaderAddress=false, payloadVersion=13, raftVersion=5, recipientRaftVersion=5}, leader snapshotIndex: 39, snapshotTerm: 3, replicatedToAllIndex: -1 2025-08-16T04:10:02,423 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | AbstractLeader | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-operational (Leader): follower member-1-shard-default-operational appears to be behind the leader from the last snapshot - updated: matchIndex: -1, nextIndex: 0 2025-08-16T04:10:02,423 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | AbstractLeader | 
190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-operational (Leader): Initiating install snapshot to follower member-1-shard-default-operational: follower nextIndex: 0, leader snapshotIndex: 39, leader lastIndex: 43, leader log size: 4 2025-08-16T04:10:02,423 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | SnapshotManager | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-operational: Initiating snapshot capture CaptureSnapshot [lastAppliedIndex=43, lastAppliedTerm=4, lastIndex=43, lastTerm=4, installSnapshotInitiated=, replicatedToAllIndex=-1, replicatedToAllTerm=-1, unAppliedEntries size=0, mandatoryTrim=false] to install on member-1-shard-default-operational 2025-08-16T04:10:02,432 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | SnapshotManager | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-operational: Persising snapshot at EntryInfo[index=43, term=4]/EntryInfo[index=43, term=4] 2025-08-16T04:10:02,433 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | SnapshotManager | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-operational: Removed in-memory snapshotted entries, adjusted snapshotIndex: 39 and term: 3 2025-08-16T04:10:02,436 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | SnapshotManager | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-operational: snapshot is durable as of 2025-08-16T04:10:02.420299500Z 2025-08-16T04:10:02,437 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | SnapshotManager | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-operational: snapshot is durable as of 2025-08-16T04:10:02.433001597Z 2025-08-16T04:10:02,493 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | AbstractLeader | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-operational (Leader): Snapshot successfully installed on follower member-1-shard-topology-operational (last chunk 1) - matchIndex set to 9, nextIndex set to 10 2025-08-16T04:10:02,526 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | AbstractLeader | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-operational (Leader): Snapshot successfully installed on follower member-1-shard-default-operational (last chunk 1) - matchIndex set to 43, nextIndex set to 44 2025-08-16T04:10:02,693 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-default-config: retiring state Enabled{clientId=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=1}, nanosAgo=6774154235, purgedHistories=MutableUnsignedLongSet{size=0}}, outdated by request from client ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2} 2025-08-16T04:10:02,866 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-default-operational: retiring state Enabled{clientId=ClientIdentifier{frontend=member-1-frontend-datastore-operational, generation=1}, nanosAgo=17087547045, purgedHistories=MutableUnsignedLongSet{size=0}}, outdated by request from client ClientIdentifier{frontend=member-1-frontend-datastore-operational, generation=2} 2025-08-16T04:10:02,932 | INFO | node-cleaner-0 | ContextChainHolderImpl | 309 - 
org.opendaylight.openflowplugin.impl - 0.20.0 | Try to remove device openflow:1 from operational DS 2025-08-16T04:10:03,653 | WARN | opendaylight-cluster-data-shard-dispatcher-45 | ShardDataTree | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-default-operational: Store Tx member-1-datastore-operational-fe-2-txn-3-0: Conflicting modification for path /(urn:ietf:params:xml:ns:yang:ietf-subscribed-notifications?revision=2019-09-09)streams/stream/stream[{(urn:ietf:params:xml:ns:yang:ietf-subscribed-notifications?revision=2019-09-09)name=NETCONF}]. 2025-08-16T04:10:15,737 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Verify Data Recovery After Leader Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Verify Data Recovery After Leader Restart 2025-08-16T04:16:07,548 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Start Mininet Again Connect To Leader" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Start Mininet Again Connect To Leader 2025-08-16T04:16:10,826 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Verify Flows In Switch After Leader Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Verify Flows In Switch After Leader Restart 2025-08-16T04:16:11,067 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-31 | DeviceOwnershipServiceImpl | 298 - org.opendaylight.openflowplugin.applications.device-ownership-service - 0.20.0 | Entity ownership change received for node : openflow:1 : REMOTE_OWNERSHIP_CHANGED [wasOwner=false, isOwner=false, hasOwner=true] 2025-08-16T04:16:11,307 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-9 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Entity ownership change received for node : REMOTE_OWNERSHIP_CHANGED [wasOwner=false, isOwner=false, hasOwner=true] 2025-08-16T04:16:11,693 | INFO | opendaylight-cluster-data-notification-dispatcher-62 | ConnectionManagerImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Clearing the device connection timer for the device 1 2025-08-16T04:17:51,743 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Stop Mininet Connected To Leader Node After Leader Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Stop Mininet Connected To Leader Node After Leader Restart 2025-08-16T04:17:52,058 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-31 | DeviceOwnershipServiceImpl | 298 - org.opendaylight.openflowplugin.applications.device-ownership-service - 0.20.0 | Entity ownership change received for node : openflow:1 : REMOTE_OWNERSHIP_LOST_NO_OWNER [wasOwner=false, isOwner=false, hasOwner=false] 2025-08-16T04:17:52,058 | INFO | 
opendaylight-cluster-data-pekko.actor.default-dispatcher-28 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Entity ownership change received for node : REMOTE_OWNERSHIP_LOST_NO_OWNER [wasOwner=false, isOwner=false, hasOwner=false] 2025-08-16T04:17:52,565 | INFO | node-cleaner-1 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Try to remove device openflow:1 from operational DS 2025-08-16T04:17:54,432 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Delete All Flows From Leader Node" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Delete All Flows From Leader Node 2025-08-16T04:17:54,863 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Verify No Flows In Cluster After Leader Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Verify No Flows In Cluster After Leader Restart 2025-08-16T04:17:55,294 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Get Inventory Follower Before follower Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Get Inventory Follower Before follower Restart 2025-08-16T04:17:56,440 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Start Mininet Connect To Follower Node2" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Start Mininet Connect To Follower Node2 2025-08-16T04:17:58,786 | INFO | epollEventLoopGroup-5-1 | SystemNotificationsListenerImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | ConnectionEvent: Connection closed by device, Device:/10.30.170.99:57830, NodeId:null 2025-08-16T04:17:58,831 | INFO | epollEventLoopGroup-5-2 | ConnectionAdapterImpl | 318 - org.opendaylight.openflowplugin.openflowjava.openflow-protocol-impl - 0.20.0 | Hello received 2025-08-16T04:17:59,222 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Add Bulk Flow From Follower Node2" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Add Bulk Flow From Follower Node2 2025-08-16T04:17:59,332 | INFO | epollEventLoopGroup-5-2 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Device openflow:1 connected. 2025-08-16T04:17:59,332 | INFO | epollEventLoopGroup-5-2 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | No context chain found for device: openflow:1, creating new. 
2025-08-16T04:17:59,332 | INFO | epollEventLoopGroup-5-2 | DeviceManagerImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | ConnectionEvent: Device connected to controller, Device:/10.30.170.99:57838, NodeId:Uri{value=openflow:1} 2025-08-16T04:17:59,353 | INFO | epollEventLoopGroup-5-2 | RoleContextImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Started timer for setting SLAVE role on device openflow:1 if no role will be set in 20s. 2025-08-16T04:17:59,638 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-28 | DeviceOwnershipServiceImpl | 298 - org.opendaylight.openflowplugin.applications.device-ownership-service - 0.20.0 | Entity ownership change received for node : openflow:1 : LOCAL_OWNERSHIP_GRANTED [wasOwner=false, isOwner=true, hasOwner=true] 2025-08-16T04:17:59,696 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Get Bulk Flows and Verify In Cluster Before Follower Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Get Bulk Flows and Verify In Cluster Before Follower Restart 2025-08-16T04:17:59,956 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-29 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Entity ownership change received for node : LOCAL_OWNERSHIP_GRANTED [wasOwner=false, isOwner=true, hasOwner=true] 2025-08-16T04:17:59,957 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-32 | GuardedContextImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Starting DeviceContextImpl[NEW] service for node openflow:1 2025-08-16T04:17:59,964 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-32 | GuardedContextImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Starting RpcContextImpl[NEW] service for node openflow:1 2025-08-16T04:17:59,989 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-32 | GuardedContextImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Starting StatisticsContextImpl[NEW] service for node openflow:1 2025-08-16T04:17:59,990 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-32 | GuardedContextImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Starting RoleContextImpl[NEW] service for node openflow:1 2025-08-16T04:17:59,997 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-32 | SalRoleRpc | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | SetRole called with input:SetRoleInput{controllerRole=BECOMEMASTER, node=NodeRef{value=DataObjectIdentifier[ @ urn.opendaylight.inventory.rev130819.Nodes ... 
nodes.Node[NodeKey{id=Uri{value=openflow:1}}] ]}} 2025-08-16T04:17:59,998 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-32 | SalRoleRpc | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Requesting state change to BECOMEMASTER 2025-08-16T04:17:59,998 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-32 | SalRoleRpc | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | RoleChangeTask called on device:openflow:1 OFPRole:BECOMEMASTER 2025-08-16T04:17:59,998 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-32 | RoleService | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | getGenerationIdFromDevice called for device: openflow:1 2025-08-16T04:18:00,010 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-32 | RoleService | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | submitRoleChange called for device:Uri{value=openflow:1}, role:BECOMEMASTER 2025-08-16T04:18:00,012 | INFO | epollEventLoopGroup-5-2 | RoleService | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | submitRoleChange onSuccess for device:Uri{value=openflow:1}, role:BECOMEMASTER 2025-08-16T04:18:00,012 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-32 | ContextChainImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Started clustering services for node openflow:1 2025-08-16T04:18:00,018 | INFO | ofppool-0 | FlowNodeReconciliationImpl | 299 - org.opendaylight.openflowplugin.applications.forwardingrules-manager - 0.20.0 | Triggering reconciliation for device NodeKey{id=Uri{value=openflow:1}} 2025-08-16T04:18:00,027 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-32 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-config: resolved shard 1 to ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-config/member-2-shard-inventory-config#1198016831], sessionId=2, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent} 2025-08-16T04:18:00,028 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-32 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-config: resolving connection ConnectingClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=1}, cookie=1} to ConnectedClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=1}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-config/member-2-shard-inventory-config#1198016831], sessionId=2, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} 2025-08-16T04:18:00,029 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-32 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-config: replaced connection ConnectingClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=1}, cookie=1} with ConnectedClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=1}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-config/member-2-shard-inventory-config#1198016831], sessionId=2, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} in 595.9 
μs 2025-08-16T04:18:00,037 | INFO | pool-20-thread-1 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Device openflow:1 connection is enabled by reconciliation framework. 2025-08-16T04:18:00,068 | INFO | epollEventLoopGroup-5-2 | DeviceInitializationUtil | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | IP address of the node openflow:1 is: IpAddress{ipv4Address=Ipv4Address{value=10.30.170.99}} 2025-08-16T04:18:00,069 | INFO | epollEventLoopGroup-5-2 | DeviceInitializationUtil | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Port number of the node openflow:1 is: 57838 2025-08-16T04:18:00,125 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Verify Flows In Switch Before Follower Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Verify Flows In Switch Before Follower Restart 2025-08-16T04:18:00,300 | INFO | epollEventLoopGroup-5-2 | OF13DeviceInitializer | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Static node openflow:1 info: OFPMPMETERFEATURES collected 2025-08-16T04:18:00,305 | INFO | epollEventLoopGroup-5-2 | OF13DeviceInitializer | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Static node openflow:1 info: OFPMPGROUPFEATURES collected 2025-08-16T04:18:00,326 | INFO | epollEventLoopGroup-5-2 | StringValueObjectFactory | 331 - org.opendaylight.yangtools.binding-reflect - 14.0.14 | Instantiated factory for class org.opendaylight.yang.gen.v1.urn.ietf.params.xml.ns.yang.ietf.yang.types.rev130715.MacAddress 2025-08-16T04:18:00,327 | INFO | epollEventLoopGroup-5-2 | StringValueObjectFactory | 331 - org.opendaylight.yangtools.binding-reflect - 14.0.14 | Instantiated factory for class org.opendaylight.yang.gen.v1.urn.ietf.params.xml.ns.yang.ietf.yang.types.rev130715.PhysAddress 2025-08-16T04:18:00,327 | INFO | epollEventLoopGroup-5-2 | StringValueObjectFactory | 331 - org.opendaylight.yangtools.binding-reflect - 14.0.14 | Instantiated factory for class org.opendaylight.yang.gen.v1.urn.ietf.params.xml.ns.yang.ietf.yang.types.rev130715.HexString 2025-08-16T04:18:00,328 | INFO | epollEventLoopGroup-5-2 | StringValueObjectFactory | 331 - org.opendaylight.yangtools.binding-reflect - 14.0.14 | Instantiated factory for class org.opendaylight.yang.gen.v1.urn.ietf.params.xml.ns.yang.ietf.yang.types.rev130715.DottedQuad 2025-08-16T04:18:00,328 | INFO | epollEventLoopGroup-5-2 | StringValueObjectFactory | 331 - org.opendaylight.yangtools.binding-reflect - 14.0.14 | Instantiated factory for class org.opendaylight.yang.gen.v1.urn.ietf.params.xml.ns.yang.ietf.yang.types.rev130715.Uuid 2025-08-16T04:18:00,330 | INFO | epollEventLoopGroup-5-2 | OF13DeviceInitializer | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Static node openflow:1 info: OFPMPPORTDESC collected 2025-08-16T04:18:00,357 | INFO | epollEventLoopGroup-5-2 | OF13DeviceInitializer | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Static node openflow:1 successfully finished collecting 2025-08-16T04:18:00,400 | INFO | pool-20-thread-1 | ContextChainImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Device openflow:1 is able to work as master 2025-08-16T04:18:00,401 | INFO | pool-20-thread-1 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Role MASTER was granted to device openflow:1 2025-08-16T04:18:00,402 | INFO | 
pool-20-thread-1 | DeviceManagerImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Publishing node added notification for Uri{value=openflow:1} 2025-08-16T04:18:00,403 | INFO | opendaylight-cluster-data-notification-dispatcher-62 | ConnectionManagerImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Clearing the device connection timer for the device 1 2025-08-16T04:18:00,405 | INFO | pool-20-thread-1 | StatisticsContextImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Starting statistics gathering for node openflow:1 2025-08-16T04:18:00,414 | INFO | opendaylight-cluster-data-notification-dispatcher-63 | StaticConfiguration | 204 - org.opendaylight.mdsal.binding-dom-adapter - 14.0.13 | Binding-over-DOM codec shortcuts are enabled 2025-08-16T04:18:00,429 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-28 | LazyBindingMap | 325 - org.opendaylight.yangtools.binding-data-codec-dynamic - 14.0.14 | Using lazy population for maps larger than 1 element(s) 2025-08-16T04:18:09,579 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-30 | TransmitQueue | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | No request matching Envelope{sessionId=3, txSequence=3e, message=ModifyTransactionSuccess{target=member-3-datastore-operational-fe-1-txn-13-1, sequence=1}} found, ignoring response 2025-08-16T04:18:27,832 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-31 | TransmitQueue | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | No request matching Envelope{sessionId=3, txSequence=88, message=ModifyTransactionSuccess{target=member-3-datastore-operational-fe-1-txn-19-1, sequence=1}} found, ignoring response 2025-08-16T04:18:58,170 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-30 | TransmitQueue | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | No request matching Envelope{sessionId=3, txSequence=102, message=ModifyTransactionSuccess{target=member-3-datastore-operational-fe-1-txn-29-1, sequence=1}} found, ignoring response 2025-08-16T04:19:25,444 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-28 | TransmitQueue | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | No request matching Envelope{sessionId=3, txSequence=170, message=ModifyTransactionSuccess{target=member-3-datastore-operational-fe-1-txn-38-1, sequence=1}} found, ignoring response 2025-08-16T04:19:42,408 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Kill Follower Node2" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Kill Follower Node2 2025-08-16T04:19:42,559 | INFO | pipe-log:log "ROBOT MESSAGE: Killing ODL3 10.30.170.30" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Killing ODL3 10.30.170.30 Aug 16, 2025 4:20:01 AM org.apache.karaf.main.lock.SimpleFileLock lock INFO: Trying to lock /tmp/karaf-0.23.0/lock Aug 16, 2025 4:20:01 AM org.apache.karaf.main.lock.SimpleFileLock lock INFO: Lock acquired Aug 16, 2025 4:20:01 AM org.apache.karaf.main.Main$KarafLockCallback lockAcquired INFO: Lock acquired. 
Setting startlevel to 100 2025-08-16T04:20:02,323 | INFO | CM Configuration Updater (ManagedService Update: pid=[org.ops4j.pax.logging]) | EventAdminConfigurationNotifier | 4 - org.ops4j.pax.logging.pax-logging-log4j2 - 2.2.8 | Logging configuration changed. (Event Admin service unavailable - no notification sent). 2025-08-16T04:20:02,323 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.apache.aries.blueprint.core/1.10.3 has been started 2025-08-16T04:20:02,443 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Starting JMX OSGi agent 2025-08-16T04:20:02,465 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Registering MBean with ObjectName [osgi.core:service=permissionadmin,version=1.2,framework=org.eclipse.osgi,uuid=5f8e399e-77e6-4df2-a109-ed1d528458c4] for service with service.id [15] 2025-08-16T04:20:02,466 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Registering MBean with ObjectName [osgi.compendium:service=cm,version=1.3,framework=org.eclipse.osgi,uuid=5f8e399e-77e6-4df2-a109-ed1d528458c4] for service with service.id [40] 2025-08-16T04:20:02,494 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | ROOT | 93 - org.apache.felix.scr - 2.2.6 | bundle org.apache.felix.scr:2.2.6 (93) Starting with globalExtender setting: false 2025-08-16T04:20:02,502 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | ROOT | 93 - org.apache.felix.scr - 2.2.6 | bundle org.apache.felix.scr:2.2.6 (93) Version = 2.2.6 2025-08-16T04:20:02,625 | INFO | activator-1-thread-1 | Activator | 113 - org.apache.karaf.management.server - 4.4.7 | Setting java.rmi.server.hostname system property to 127.0.0.1 2025-08-16T04:20:02,701 | INFO | activator-1-thread-2 | Activator | 99 - org.apache.karaf.deployer.features - 4.4.7 | Deployment finished. 
Registering FeatureDeploymentListener 2025-08-16T04:20:02,727 | INFO | activator-1-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Registering org.osgi.jmx.framework.PackageStateMBean to MBeanServer org.apache.karaf.management.internal.EventAdminMBeanServerWrapper@39b58c93 with name osgi.core:type=packageState,version=1.5,framework=org.eclipse.osgi,uuid=5f8e399e-77e6-4df2-a109-ed1d528458c4 2025-08-16T04:20:02,728 | INFO | activator-1-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Registering org.osgi.jmx.service.permissionadmin.PermissionAdminMBean to MBeanServer org.apache.karaf.management.internal.EventAdminMBeanServerWrapper@39b58c93 with name osgi.core:service=permissionadmin,version=1.2,framework=org.eclipse.osgi,uuid=5f8e399e-77e6-4df2-a109-ed1d528458c4 2025-08-16T04:20:02,728 | INFO | activator-1-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Registering org.osgi.jmx.service.cm.ConfigurationAdminMBean to MBeanServer org.apache.karaf.management.internal.EventAdminMBeanServerWrapper@39b58c93 with name osgi.compendium:service=cm,version=1.3,framework=org.eclipse.osgi,uuid=5f8e399e-77e6-4df2-a109-ed1d528458c4 2025-08-16T04:20:02,728 | INFO | activator-1-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Registering org.osgi.jmx.framework.ServiceStateMBean to MBeanServer org.apache.karaf.management.internal.EventAdminMBeanServerWrapper@39b58c93 with name osgi.core:type=serviceState,version=1.7,framework=org.eclipse.osgi,uuid=5f8e399e-77e6-4df2-a109-ed1d528458c4 2025-08-16T04:20:02,729 | INFO | activator-1-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Registering org.osgi.jmx.framework.wiring.BundleWiringStateMBean to MBeanServer org.apache.karaf.management.internal.EventAdminMBeanServerWrapper@39b58c93 with name osgi.core:type=wiringState,version=1.1,framework=org.eclipse.osgi,uuid=5f8e399e-77e6-4df2-a109-ed1d528458c4 2025-08-16T04:20:02,730 | INFO | activator-1-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Registering org.osgi.jmx.framework.FrameworkMBean to MBeanServer org.apache.karaf.management.internal.EventAdminMBeanServerWrapper@39b58c93 with name osgi.core:type=framework,version=1.7,framework=org.eclipse.osgi,uuid=5f8e399e-77e6-4df2-a109-ed1d528458c4 2025-08-16T04:20:02,730 | INFO | activator-1-thread-1 | core | 83 - org.apache.aries.jmx.core - 1.1.8 | Registering org.osgi.jmx.framework.BundleStateMBean to MBeanServer org.apache.karaf.management.internal.EventAdminMBeanServerWrapper@39b58c93 with name osgi.core:type=bundleState,version=1.7,framework=org.eclipse.osgi,uuid=5f8e399e-77e6-4df2-a109-ed1d528458c4 2025-08-16T04:20:02,780 | INFO | activator-1-thread-1 | ServiceComponentRuntimeMBeanImpl | 115 - org.apache.karaf.scr.management - 4.4.7 | Activating the Apache Karaf ServiceComponentRuntime MBean 2025-08-16T04:20:02,837 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.bundle.core/4.4.7 2025-08-16T04:20:02,845 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.config.command/4.4.7 2025-08-16T04:20:02,915 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.deployer.kar/4.4.7 
2025-08-16T04:20:02,916 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.diagnostic.core/4.4.7 2025-08-16T04:20:02,928 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.features.command/4.4.7 2025-08-16T04:20:02,931 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Command registration delayed for bundle org.apache.karaf.http.core/4.4.7. Missing service: [org.apache.karaf.http.core.ProxyService] 2025-08-16T04:20:02,938 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.instance.core/4.4.7 2025-08-16T04:20:02,946 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.jaas.command/4.4.7 2025-08-16T04:20:02,948 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Updating commands for bundle org.apache.karaf.jaas.command/4.4.7 2025-08-16T04:20:02,948 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Updating commands for bundle org.apache.karaf.jaas.command/4.4.7 2025-08-16T04:20:02,951 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.kar.core/4.4.7 2025-08-16T04:20:02,956 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.log.core/4.4.7 2025-08-16T04:20:02,958 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.package.core/4.4.7 2025-08-16T04:20:02,959 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.service.core/4.4.7 2025-08-16T04:20:02,966 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.shell.commands/4.4.7 2025-08-16T04:20:02,967 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Updating commands for bundle org.apache.karaf.shell.commands/4.4.7 2025-08-16T04:20:02,969 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | Activator | 120 - org.apache.karaf.shell.core - 4.4.7 | Not starting local console. 
To activate set karaf.startLocalConsole=true 2025-08-16T04:20:03,000 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.apache.karaf.shell.core/4.4.7 has been started 2025-08-16T04:20:03,059 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Command registration delayed for bundle org.apache.karaf.shell.ssh/4.4.7. Missing service: [org.apache.sshd.server.SshServer] 2025-08-16T04:20:03,103 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.system.core/4.4.7 2025-08-16T04:20:03,138 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Command registration delayed for bundle org.apache.karaf.web.core/4.4.7. Missing service: [org.apache.karaf.web.WebContainerService] 2025-08-16T04:20:03,207 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | Activator | 392 - org.ops4j.pax.web.pax-web-extender-war - 8.0.30 | Configuring WAR extender thread pool. Pool size = 3 2025-08-16T04:20:03,236 | INFO | activator-1-thread-1 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.shell.ssh/4.4.7 2025-08-16T04:20:03,244 | INFO | activator-1-thread-1 | DefaultIoServiceFactoryFactory | 125 - org.apache.sshd.osgi - 2.14.0 | No detected/configured IoServiceFactoryFactory; using Nio2ServiceFactoryFactory 2025-08-16T04:20:03,325 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | Activator | 393 - org.ops4j.pax.web.pax-web-extender-whiteboard - 8.0.30 | Starting Pax Web Whiteboard Extender 2025-08-16T04:20:03,373 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | log | 139 - org.eclipse.jetty.util - 9.4.57.v20241219 | Logging initialized @3336ms to org.eclipse.jetty.util.log.Slf4jLog 2025-08-16T04:20:03,391 | INFO | CM Configuration Updater (ManagedService Update: pid=[org.ops4j.pax.web]) | Activator | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Scheduling Pax Web reconfiguration because configuration has changed 2025-08-16T04:20:03,392 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | Activator | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | EventAdmin support enabled, WAB events will be posted to EventAdmin topics. 
2025-08-16T04:20:03,392 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | Activator | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Pax Web Runtime started 2025-08-16T04:20:03,394 | INFO | paxweb-config-3-thread-1 (change config) | Activator | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Scheduling Pax Web reconfiguration because ServerControllerFactory has been registered 2025-08-16T04:20:03,407 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | BlueprintBundleTracker | 180 - org.opendaylight.controller.blueprint - 11.0.0 | Starting BlueprintBundleTracker 2025-08-16T04:20:03,424 | INFO | Blueprint Event Dispatcher: 1 | BlueprintBundleTracker | 180 - org.opendaylight.controller.blueprint - 11.0.0 | Blueprint container for bundle org.apache.karaf.shell.core_4.4.7 [120] was successfully created 2025-08-16T04:20:03,426 | INFO | Blueprint Event Dispatcher: 1 | BlueprintBundleTracker | 180 - org.opendaylight.controller.blueprint - 11.0.0 | Blueprint container for bundle org.apache.aries.blueprint.cm_1.3.2 [78] was successfully created 2025-08-16T04:20:03,427 | INFO | Blueprint Event Dispatcher: 1 | BlueprintBundleTracker | 180 - org.opendaylight.controller.blueprint - 11.0.0 | Blueprint container for bundle org.apache.aries.blueprint.core_1.10.3 [79] was successfully created 2025-08-16T04:20:03,453 | INFO | paxweb-config-3-thread-1 (change controller) | Activator | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Configuring server controller org.ops4j.pax.web.service.jetty.internal.JettyServerController 2025-08-16T04:20:03,454 | INFO | paxweb-config-3-thread-1 (change controller) | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Configuring JettyServerController{configuration=22b9b92e-5130-420b-8e07-01898f43eed6,state=UNCONFIGURED} 2025-08-16T04:20:03,454 | INFO | paxweb-config-3-thread-1 (change controller) | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Creating Jetty server instance using configuration properties. 
2025-08-16T04:20:03,467 | INFO | paxweb-config-3-thread-1 (change controller) | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Processing Jetty configuration from files: [etc/jetty.xml] 2025-08-16T04:20:03,559 | INFO | paxweb-config-3-thread-1 (change controller) | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Found configured connector "jetty-default": 0.0.0.0:8181 2025-08-16T04:20:03,561 | INFO | paxweb-config-3-thread-1 (change controller) | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Using configured jetty-default@13d6d75a{HTTP/1.1, (http/1.1)}{0.0.0.0:8181} as non secure connector for address: 0.0.0.0:8181 2025-08-16T04:20:03,562 | INFO | paxweb-config-3-thread-1 (change controller) | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Eagerly starting Jetty thread pool QueuedThreadPool[qtp273192795]@1048975b{STOPPED,0<=0<=200,i=0,r=-1,q=0}[NO_TRY] 2025-08-16T04:20:03,565 | INFO | paxweb-config-3-thread-1 (change controller) | JettyFactory | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Adding JMX support to Jetty server 2025-08-16T04:20:03,583 | INFO | paxweb-config-3-thread-1 (change controller) | Activator | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Starting server controller org.ops4j.pax.web.service.jetty.internal.JettyServerController 2025-08-16T04:20:03,583 | INFO | paxweb-config-3-thread-1 (change controller) | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Starting JettyServerController{configuration=22b9b92e-5130-420b-8e07-01898f43eed6,state=STOPPED} 2025-08-16T04:20:03,583 | INFO | paxweb-config-3-thread-1 (change controller) | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Starting Server@29ef94b9{STOPPED}[9.4.57.v20241219] 2025-08-16T04:20:03,584 | INFO | paxweb-config-3-thread-1 (change controller) | Server | 139 - org.eclipse.jetty.util - 9.4.57.v20241219 | jetty-9.4.57.v20241219; built: 2025-01-08T21:24:30.412Z; git: df524e6b29271c2e09ba9aea83c18dc9db464a31; jvm 21.0.5+11-Ubuntu-1ubuntu122.04 2025-08-16T04:20:03,598 | INFO | paxweb-config-3-thread-1 (change controller) | session | 139 - org.eclipse.jetty.util - 9.4.57.v20241219 | DefaultSessionIdManager workerName=node0 2025-08-16T04:20:03,599 | INFO | paxweb-config-3-thread-1 (change controller) | session | 139 - org.eclipse.jetty.util - 9.4.57.v20241219 | No SessionScavenger set, using defaults 2025-08-16T04:20:03,601 | INFO | paxweb-config-3-thread-1 (change controller) | session | 139 - org.eclipse.jetty.util - 9.4.57.v20241219 | node0 Scavenging every 600000ms 2025-08-16T04:20:03,629 | INFO | paxweb-config-3-thread-1 (change controller) | AbstractConnector | 139 - org.eclipse.jetty.util - 9.4.57.v20241219 | Started jetty-default@13d6d75a{HTTP/1.1, (http/1.1)}{0.0.0.0:8181} 2025-08-16T04:20:03,630 | INFO | paxweb-config-3-thread-1 (change controller) | Server | 139 - org.eclipse.jetty.util - 9.4.57.v20241219 | Started @3608ms 2025-08-16T04:20:03,631 | INFO | paxweb-config-3-thread-1 (change controller) | Activator | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering HttpService factory 2025-08-16T04:20:03,632 | INFO | paxweb-config-3-thread-1 (change controller) | StoppableHttpServiceFactory | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Binding HTTP Service for bundle: [org.apache.karaf.http.core_4.4.7 [105]] 2025-08-16T04:20:03,644 | INFO | paxweb-config-3-thread-1 (change controller) | StoppableHttpServiceFactory | 396 - 
org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Binding HTTP Service for bundle: [org.apache.karaf.web.core_4.4.7 [124]] 2025-08-16T04:20:03,648 | INFO | activator-1-thread-2 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.http.core/4.4.7 2025-08-16T04:20:03,652 | INFO | activator-1-thread-2 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.web.core/4.4.7 2025-08-16T04:20:03,656 | INFO | paxweb-config-3-thread-1 (change controller) | Activator | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering HttpServiceRuntime 2025-08-16T04:20:03,658 | INFO | HttpService->WarExtender (add HttpService) | StoppableHttpServiceFactory | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Binding HTTP Service for bundle: [org.ops4j.pax.web.pax-web-extender-war_8.0.30 [392]] 2025-08-16T04:20:03,660 | INFO | HttpService->Whiteboard (add HttpService) | StoppableHttpServiceFactory | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Binding HTTP Service for bundle: [org.ops4j.pax.web.pax-web-extender-whiteboard_8.0.30 [393]] 2025-08-16T04:20:03,662 | INFO | paxweb-config-3-thread-1 | ServerModel | 397 - org.ops4j.pax.web.pax-web-spi - 8.0.30 | Created new ServletContextModel{id=ServletContextModel-3,contextPath='/'} 2025-08-16T04:20:03,663 | INFO | paxweb-config-3-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of OsgiContextModel{WB,id=OCM-1,name='default',path='/',bundle=org.ops4j.pax.web.pax-web-extender-whiteboard,context=(supplier)}", size=2} 2025-08-16T04:20:03,663 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Creating new Jetty context for ServletContextModel{id=ServletContextModel-3,contextPath='/'} 2025-08-16T04:20:03,693 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Adding OsgiContextModel{WB,id=OCM-1,name='default',path='/',bundle=org.ops4j.pax.web.pax-web-extender-whiteboard,context=(supplier)} to o.o.p.w.s.j.i.PaxWebServletContextHandler@11050798{/,null,STOPPED} 2025-08-16T04:20:03,695 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing default OSGi context model for o.o.p.w.s.j.i.PaxWebServletContextHandler@11050798{/,null,STOPPED} 2025-08-16T04:20:03,721 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.apache.karaf.jdbc.core/4.4.7 2025-08-16T04:20:03,784 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | StoppableHttpServiceFactory | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Binding HTTP Service for bundle: [org.jolokia.osgi_1.7.2 [155]] 2025-08-16T04:20:03,793 | INFO | paxweb-config-3-thread-1 | HttpServiceEnabled | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering ServletModel{id=ServletModel-4,name='org.jolokia.osgi.servlet.JolokiaServlet',alias='/jolokia',urlPatterns=[/jolokia/*],servlet=org.jolokia.osgi.servlet.JolokiaServlet@4c9039b4,contexts=[{HS,OCM-5,context:575847817,/}]} 2025-08-16T04:20:03,793 | INFO | paxweb-config-3-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of 
ServletModel{id=ServletModel-4,name='org.jolokia.osgi.servlet.JolokiaServlet',alias='/jolokia',urlPatterns=[/jolokia/*],servlet=org.jolokia.osgi.servlet.JolokiaServlet@4c9039b4,contexts=null}", size=3} 2025-08-16T04:20:03,794 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Adding OsgiContextModel{HS,id=OCM-5,name='context:575847817',path='/',bundle=org.jolokia.osgi,context=WebContainerContextWrapper{bundle=org.jolokia.osgi_1.7.2 [155],contextId='context:575847817',delegate=org.jolokia.osgi.security.ServiceAuthenticationHttpContext@2252bd89}} to o.o.p.w.s.j.i.PaxWebServletContextHandler@11050798{/,null,STOPPED} 2025-08-16T04:20:03,795 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing default OSGi context model for o.o.p.w.s.j.i.PaxWebServletContextHandler@11050798{/,null,STOPPED} 2025-08-16T04:20:03,795 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Adding servlet ServletModel{id=ServletModel-4,name='org.jolokia.osgi.servlet.JolokiaServlet',alias='/jolokia',urlPatterns=[/jolokia/*],servlet=org.jolokia.osgi.servlet.JolokiaServlet@4c9039b4,contexts=[{HS,OCM-5,context:575847817,/}]} 2025-08-16T04:20:03,799 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Starting Jetty context "/" with default Osgi Context OsgiContextModel{HS,id=OCM-5,name='context:575847817',path='/',bundle=org.jolokia.osgi,context=WebContainerContextWrapper{bundle=org.jolokia.osgi_1.7.2 [155],contextId='context:575847817',delegate=org.jolokia.osgi.security.ServiceAuthenticationHttpContext@2252bd89}} 2025-08-16T04:20:03,815 | INFO | paxweb-config-3-thread-1 | osgi | 155 - org.jolokia.osgi - 1.7.2 | No access restrictor found, access to any MBean is allowed 2025-08-16T04:20:03,839 | INFO | paxweb-config-3-thread-1 | ContextHandler | 139 - org.eclipse.jetty.util - 9.4.57.v20241219 | Started o.o.p.w.s.j.i.PaxWebServletContextHandler@11050798{/,null,AVAILABLE} 2025-08-16T04:20:03,840 | INFO | paxweb-config-3-thread-1 | OsgiServletContext | 397 - org.ops4j.pax.web.pax-web-spi - 8.0.30 | Registering OsgiServletContext{model=OsgiContextModel{HS,id=OCM-5,name='context:575847817',path='/',bundle=org.jolokia.osgi,context=WebContainerContextWrapper{bundle=org.jolokia.osgi_1.7.2 [155],contextId='context:575847817',delegate=org.jolokia.osgi.security.ServiceAuthenticationHttpContext@2252bd89}}} as OSGi service for "/" context path 2025-08-16T04:20:03,933 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.cert/0.21.0 is waiting for dependencies [(objectClass=org.opendaylight.aaa.encrypt.AAAEncryptionService), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (objectClass=org.opendaylight.mdsal.binding.api.RpcProviderService)] 2025-08-16T04:20:03,958 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | CustomFilterAdapterConfigurationImpl | 166 - org.opendaylight.aaa.filterchain - 0.21.0 | Custom filter properties updated: {service.pid=org.opendaylight.aaa.filterchain, osgi.ds.satisfying.condition.target=(osgi.condition.id=true), customFilterList=, 
component.name=org.opendaylight.aaa.filterchain.configuration.impl.CustomFilterAdapterConfigurationImpl, felix.fileinstall.filename=file:/tmp/karaf-0.23.0/etc/org.opendaylight.aaa.filterchain.cfg, component.id=4, Filter.target=(org.opendaylight.aaa.filterchain.filter=true)} 2025-08-16T04:20:03,978 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.shiro/0.21.0 is waiting for dependencies [(objectClass=org.opendaylight.aaa.api.password.service.PasswordHashService), (objectClass=org.opendaylight.aaa.cert.api.ICertificateManager), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.aaa.api.PasswordCredentialAuth), (objectClass=org.opendaylight.aaa.web.servlet.ServletSupport), (objectClass=org.opendaylight.aaa.api.IIDMStore), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.aaa.api.AuthenticationService)] 2025-08-16T04:20:03,999 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | StoppableHttpServiceFactory | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Binding HTTP Service for bundle: [org.opendaylight.aaa.shiro_0.21.0 [172]] 2025-08-16T04:20:04,000 | INFO | paxweb-config-3-thread-1 | HttpServiceEnabled | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering ServletModel{id=ServletModel-8,name='MoonTokenEndpoint',urlPatterns=[/moon],contexts=[{WB,OCM-1,default,/}]} 2025-08-16T04:20:04,001 | INFO | paxweb-config-3-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of ServletModel{id=ServletModel-8,name='MoonTokenEndpoint',urlPatterns=[/moon],contexts=[{WB,OCM-1,default,/}]}", size=1} 2025-08-16T04:20:04,001 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Adding servlet ServletModel{id=ServletModel-8,name='MoonTokenEndpoint',urlPatterns=[/moon],contexts=[{WB,OCM-1,default,/}]} 2025-08-16T04:20:04,007 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.shiro/0.21.0 is waiting for dependencies [(objectClass=org.opendaylight.aaa.api.password.service.PasswordHashService), (objectClass=org.opendaylight.aaa.cert.api.ICertificateManager), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.aaa.api.PasswordCredentialAuth), (objectClass=org.opendaylight.aaa.web.servlet.ServletSupport), (objectClass=org.opendaylight.aaa.api.IIDMStore), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer))] 2025-08-16T04:20:04,029 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.shiro/0.21.0 is waiting for dependencies [(objectClass=org.opendaylight.aaa.api.password.service.PasswordHashService), (objectClass=org.opendaylight.aaa.cert.api.ICertificateManager), 
(&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.aaa.api.PasswordCredentialAuth), (objectClass=org.opendaylight.aaa.api.IIDMStore), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer))] 2025-08-16T04:20:04,060 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | FileAkkaConfigurationReader | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | File-based Pekko configuration reader enabled 2025-08-16T04:20:04,073 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | OSGiActorSystemProvider | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Actor System provider starting 2025-08-16T04:20:04,243 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | ActorSystemProviderImpl | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Creating new ActorSystem 2025-08-16T04:20:04,513 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | Slf4jLogger | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Slf4jLogger started 2025-08-16T04:20:04,721 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | ArteryTransport | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Remoting started with transport [Artery tcp]; listening on address [pekko://opendaylight-cluster-data@10.30.170.30:2550] with UID [5484923156571569346] 2025-08-16T04:20:04,730 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.30:2550] - Starting up, Pekko version [1.0.3] ... 2025-08-16T04:20:04,774 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.30:2550] - Registered cluster JMX MBean [pekko:type=Cluster] 2025-08-16T04:20:04,777 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.30:2550] - Started up successfully 2025-08-16T04:20:04,813 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | SplitBrainResolver | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | SBR started. Config: strategy [KeepMajority], stable-after [7 seconds], down-all-when-unstable [5250 milliseconds], selfUniqueAddress [pekko://opendaylight-cluster-data@10.30.170.30:2550#5484923156571569346], selfDc [default]. 
2025-08-16T04:20:04,950 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | OSGiActorSystemProvider | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Actor System provider started 2025-08-16T04:20:04,957 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | FileModuleShardConfigProvider | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Shard configuration provider started 2025-08-16T04:20:04,978 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Command registration delayed for bundle org.opendaylight.infrautils.diagstatus-shell/7.1.4. Missing service: [org.opendaylight.infrautils.diagstatus.DiagStatusServiceMBean] 2025-08-16T04:20:05,047 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | EmptyLocalActorRef | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Message [org.opendaylight.controller.cluster.raft.messages.AppendEntries] from Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-operational/member-1-shard-topology-operational#-120171616] to Actor[pekko://opendaylight-cluster-data/user/shardmanager-operational/member-3-shard-topology-operational] was not delivered. [1] dead letters encountered. If this is not an expected behavior then Actor[pekko://opendaylight-cluster-data/user/shardmanager-operational/member-3-shard-topology-operational] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. 2025-08-16T04:20:05,047 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | EmptyLocalActorRef | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Message [org.opendaylight.controller.cluster.raft.messages.AppendEntries] from Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-operational/member-1-shard-default-operational#-534131459] to Actor[pekko://opendaylight-cluster-data/user/shardmanager-operational/member-3-shard-default-operational] was not delivered. [2] dead letters encountered. If this is not an expected behavior then Actor[pekko://opendaylight-cluster-data/user/shardmanager-operational/member-3-shard-default-operational] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. 2025-08-16T04:20:05,145 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | EmptyLocalActorRef | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Message [org.opendaylight.controller.cluster.raft.messages.AppendEntries] from Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-operational/member-2-shard-toaster-operational#141404316] to Actor[pekko://opendaylight-cluster-data/user/shardmanager-operational/member-3-shard-toaster-operational] was not delivered. [3] dead letters encountered. If this is not an expected behavior then Actor[pekko://opendaylight-cluster-data/user/shardmanager-operational/member-3-shard-toaster-operational] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. 
2025-08-16T04:20:05,145 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | EmptyLocalActorRef | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Message [org.opendaylight.controller.cluster.raft.messages.AppendEntries] from Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-operational/member-2-shard-inventory-operational#1155012946] to Actor[pekko://opendaylight-cluster-data/user/shardmanager-operational/member-3-shard-inventory-operational] was not delivered. [4] dead letters encountered. If this is not an expected behavior then Actor[pekko://opendaylight-cluster-data/user/shardmanager-operational/member-3-shard-inventory-operational] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. 2025-08-16T04:20:05,146 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | EmptyLocalActorRef | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Message [org.opendaylight.controller.cluster.raft.messages.AppendEntries] from Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-config/member-2-shard-inventory-config#1198016831] to Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config] was not delivered. [5] dead letters encountered. If this is not an expected behavior then Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. 2025-08-16T04:20:05,147 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | EmptyLocalActorRef | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Message [org.opendaylight.controller.cluster.raft.messages.AppendEntries] from Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-operational/member-2-shard-toaster-operational#141404316] to Actor[pekko://opendaylight-cluster-data/user/shardmanager-operational/member-3-shard-toaster-operational] was not delivered. [6] dead letters encountered. If this is not an expected behavior then Actor[pekko://opendaylight-cluster-data/user/shardmanager-operational/member-3-shard-toaster-operational] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. 2025-08-16T04:20:05,148 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | EmptyLocalActorRef | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Message [org.opendaylight.controller.cluster.raft.messages.AppendEntries] from Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-config/member-2-shard-topology-config#467029840] to Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-topology-config] was not delivered. [7] dead letters encountered. If this is not an expected behavior then Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-topology-config] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. 
2025-08-16T04:20:05,149 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | EmptyLocalActorRef | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Message [org.opendaylight.controller.cluster.raft.messages.AppendEntries] from Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-operational/member-2-shard-inventory-operational#1155012946] to Actor[pekko://opendaylight-cluster-data/user/shardmanager-operational/member-3-shard-inventory-operational] was not delivered. [8] dead letters encountered. If this is not an expected behavior then Actor[pekko://opendaylight-cluster-data/user/shardmanager-operational/member-3-shard-inventory-operational] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. 2025-08-16T04:20:05,149 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | EmptyLocalActorRef | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Message [org.opendaylight.controller.cluster.raft.messages.AppendEntries] from Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-config/member-2-shard-inventory-config#1198016831] to Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config] was not delivered. [9] dead letters encountered. If this is not an expected behavior then Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-inventory-config] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. 2025-08-16T04:20:05,151 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | EmptyLocalActorRef | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Message [org.opendaylight.controller.cluster.raft.messages.AppendEntries] from Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-config/member-2-shard-topology-config#467029840] to Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-topology-config] was not delivered. [10] dead letters encountered, no more dead letters will be logged in next [5.000 min]. If this is not an expected behavior then Actor[pekko://opendaylight-cluster-data/user/shardmanager-config/member-3-shard-topology-config] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'pekko.log-dead-letters' and 'pekko.log-dead-letters-during-shutdown'. 2025-08-16T04:20:05,151 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.30:2550] - Received InitJoinAck message from [Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/system/cluster/core/daemon#546538922]] to [pekko://opendaylight-cluster-data@10.30.170.30:2550] 2025-08-16T04:20:05,162 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | KarafSystemReady | 202 - org.opendaylight.infrautils.ready-impl - 7.1.4 | ThreadFactory created: SystemReadyService 2025-08-16T04:20:05,163 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | KarafSystemReady | 202 - org.opendaylight.infrautils.ready-impl - 7.1.4 | Now starting to provide full system readiness status updates (see TestBundleDiag's logs)... 
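The repeated AppendEntries dead-letter messages above appear because the target shard actors on this member had not yet been created (they are started a few seconds later in this log), and the entries themselves name the switches that control the logging. A minimal sketch, assuming the same Pekko configuration file as in the previous note, would be:

    pekko {
      # Accepts "on", "off", or a number of messages to log before suppressing.
      log-dead-letters = off
      log-dead-letters-during-shutdown = off
    }

Adjusting these only quiets the log output; it does not change how undeliverable messages are handled.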
2025-08-16T04:20:05,164 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | DiagStatusServiceImpl | 199 - org.opendaylight.infrautils.diagstatus-impl - 7.1.4 | Diagnostic Status Service started 2025-08-16T04:20:05,168 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | MBeanUtils | 198 - org.opendaylight.infrautils.diagstatus-api - 7.1.4 | MBean registration for org.opendaylight.infrautils.diagstatus:type=SvcStatus SUCCESSFUL. 2025-08-16T04:20:05,169 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | DiagStatusServiceMBeanImpl | 199 - org.opendaylight.infrautils.diagstatus-impl - 7.1.4 | Diagnostic Status Service management started 2025-08-16T04:20:05,169 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.opendaylight.infrautils.diagstatus-shell/7.1.4 2025-08-16T04:20:05,190 | INFO | SystemReadyService-0 | KarafSystemReady | 202 - org.opendaylight.infrautils.ready-impl - 7.1.4 | checkBundleDiagInfos() started... 2025-08-16T04:20:05,205 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.forwardingrules-manager/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.openflowplugin.applications.reconciliation.ReconciliationManager), (objectClass=org.opendaylight.mdsal.binding.api.RpcService), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.openflowplugin.api.openflow.mastership.MastershipChangeServiceManager), (objectClass=org.opendaylight.mdsal.binding.api.RpcProviderService), (objectClass=org.opendaylight.openflowplugin.applications.frm.recovery.OpenflowServiceRecoveryHandler), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (objectClass=org.opendaylight.openflowplugin.api.openflow.FlowGroupCacheManager), (objectClass=org.opendaylight.serviceutils.srm.ServiceRecoveryRegistry), (objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService)] 2025-08-16T04:20:05,215 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.lldp-speaker/0.20.0 is waiting for dependencies [(&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (objectClass=org.opendaylight.openflowplugin.applications.deviceownershipservice.DeviceOwnershipService), (objectClass=org.opendaylight.mdsal.binding.api.RpcService), (objectClass=org.opendaylight.mdsal.binding.api.RpcProviderService)] 2025-08-16T04:20:05,226 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | Cluster | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Cluster Node [pekko://opendaylight-cluster-data@10.30.170.30:2550] - Welcome from [pekko://opendaylight-cluster-data@10.30.170.196:2550] 2025-08-16T04:20:05,238 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Command registration delayed for bundle 
org.opendaylight.openflowplugin.applications.reconciliation-framework/0.20.0. Missing service: [org.opendaylight.openflowplugin.applications.reconciliation.ReconciliationManager] 2025-08-16T04:20:05,252 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.topology-lldp-discovery/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (objectClass=org.opendaylight.mdsal.binding.api.NotificationPublishService), (objectClass=org.opendaylight.mdsal.binding.api.NotificationService), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService), (objectClass=org.opendaylight.mdsal.eos.binding.api.EntityOwnershipService)] 2025-08-16T04:20:05,322 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.impl/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationServiceFactory), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer))] 2025-08-16T04:20:05,330 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.impl/0.20.0 is waiting for dependencies [(&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer))] 2025-08-16T04:20:05,331 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.forwardingrules-manager/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.openflowplugin.applications.reconciliation.ReconciliationManager), (objectClass=org.opendaylight.mdsal.binding.api.RpcService), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.mdsal.binding.api.RpcProviderService), (objectClass=org.opendaylight.openflowplugin.applications.frm.recovery.OpenflowServiceRecoveryHandler), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (objectClass=org.opendaylight.openflowplugin.api.openflow.FlowGroupCacheManager), (objectClass=org.opendaylight.serviceutils.srm.ServiceRecoveryRegistry), (objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService)] 2025-08-16T04:20:05,331 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.forwardingrules-manager/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.mdsal.binding.api.RpcService), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.mdsal.binding.api.RpcProviderService), (objectClass=org.opendaylight.openflowplugin.applications.frm.recovery.OpenflowServiceRecoveryHandler), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker), 
(objectClass=org.opendaylight.openflowplugin.api.openflow.FlowGroupCacheManager), (objectClass=org.opendaylight.serviceutils.srm.ServiceRecoveryRegistry), (objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService)] 2025-08-16T04:20:05,335 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | ReconciliationManagerImpl | 302 - org.opendaylight.openflowplugin.applications.reconciliation-framework - 0.20.0 | ReconciliationManager started 2025-08-16T04:20:05,335 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.opendaylight.openflowplugin.applications.reconciliation-framework/0.20.0 2025-08-16T04:20:05,336 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.forwardingrules-manager/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.mdsal.binding.api.RpcService), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.mdsal.binding.api.RpcProviderService), (objectClass=org.opendaylight.openflowplugin.applications.frm.recovery.OpenflowServiceRecoveryHandler), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (objectClass=org.opendaylight.serviceutils.srm.ServiceRecoveryRegistry), (objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService)] 2025-08-16T04:20:05,340 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | MessageIntelligenceAgencyImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Registered MBean org.opendaylight.openflowplugin.impl.statistics.ofpspecific:type=MessageIntelligenceAgencyMXBean 2025-08-16T04:20:05,342 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.opendaylight.openflowplugin.impl/0.20.0 2025-08-16T04:20:05,364 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.forwardingrules-manager/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.mdsal.binding.api.RpcService), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.mdsal.binding.api.RpcProviderService), (objectClass=org.opendaylight.openflowplugin.applications.frm.recovery.OpenflowServiceRecoveryHandler), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService)] 2025-08-16T04:20:05,365 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.forwardingrules-manager/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.mdsal.binding.api.RpcService), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.mdsal.binding.api.RpcProviderService), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker), 
(objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService)] 2025-08-16T04:20:05,366 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | OpenflowServiceRecoveryHandlerImpl | 299 - org.opendaylight.openflowplugin.applications.forwardingrules-manager - 0.20.0 | Registering openflowplugin service recovery handlers 2025-08-16T04:20:05,376 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Command registration delayed for bundle org.opendaylight.openflowplugin.srm-shell/0.20.0. Missing service: [org.opendaylight.mdsal.binding.api.DataBroker, org.opendaylight.serviceutils.srm.spi.RegistryControl] 2025-08-16T04:20:05,383 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | SimpleBindingDOMCodecFactory | 325 - org.opendaylight.yangtools.binding-data-codec-dynamic - 14.0.14 | Binding/DOM Codec enabled 2025-08-16T04:20:05,389 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | OSGiBindingDOMCodec | 326 - org.opendaylight.yangtools.binding-data-codec-osgi - 14.0.14 | Binding/DOM Codec activating 2025-08-16T04:20:05,390 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | OSGiBindingDOMCodec | 326 - org.opendaylight.yangtools.binding-data-codec-osgi - 14.0.14 | Binding/DOM Codec activated 2025-08-16T04:20:05,401 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | DefaultBindingRuntimeGenerator | 328 - org.opendaylight.yangtools.binding-generator - 14.0.14 | Binding/YANG type support activated 2025-08-16T04:20:05,411 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | OSGiBindingRuntime | 333 - org.opendaylight.yangtools.binding-runtime-osgi - 14.0.14 | Binding Runtime activating 2025-08-16T04:20:05,412 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | OSGiBindingRuntime | 333 - org.opendaylight.yangtools.binding-runtime-osgi - 14.0.14 | Binding Runtime activated 2025-08-16T04:20:05,467 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | OSGiModelRuntime | 333 - org.opendaylight.yangtools.binding-runtime-osgi - 14.0.14 | Model Runtime starting 2025-08-16T04:20:05,487 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | KarafFeaturesSupport | 333 - org.opendaylight.yangtools.binding-runtime-osgi - 14.0.14 | Will attempt to integrate with Karaf FeaturesService 2025-08-16T04:20:05,977 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | NettyTransportSupport | 284 - org.opendaylight.netconf.transport-api - 9.0.0 | Netty transport backed by epoll(2) 2025-08-16T04:20:06,198 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | SharedEffectiveModelContextFactory | 379 - org.opendaylight.yangtools.yang-parser-impl - 14.0.14 | Using weak references 2025-08-16T04:20:08,206 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | OSGiModuleInfoSnapshotImpl | 333 - org.opendaylight.yangtools.binding-runtime-osgi - 14.0.14 | EffectiveModelContext generation 1 activated 2025-08-16T04:20:08,208 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | OSGiDOMSchemaService | 251 - org.opendaylight.mdsal.mdsal-dom-schema-osgi - 14.0.13 | DOM Schema services activated 2025-08-16T04:20:08,208 | INFO | Start Level: Equinox Container: 
5f8e399e-77e6-4df2-a109-ed1d528458c4 | OSGiDOMSchemaService | 251 - org.opendaylight.mdsal.mdsal-dom-schema-osgi - 14.0.13 | Updating context to generation 1 2025-08-16T04:20:08,216 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | DOMRpcRouter | 250 - org.opendaylight.mdsal.mdsal-dom-broker - 14.0.13 | DOM RPC/Action router started 2025-08-16T04:20:08,224 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | OSGiRemoteOpsProvider | 197 - org.opendaylight.controller.sal-remoterpc-connector - 11.0.0 | Remote Operations service starting 2025-08-16T04:20:08,227 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | OSGiRemoteOpsProvider | 197 - org.opendaylight.controller.sal-remoterpc-connector - 11.0.0 | Remote Operations service started 2025-08-16T04:20:08,348 | INFO | opendaylight-cluster-data-pekko.persistence.dispatchers.default-plugin-dispatcher-32 | SegmentedFileJournal | 191 - org.opendaylight.controller.sal-akka-segmented-journal - 11.0.0 | Initialized with root directory segmented-journal with storage MAPPED 2025-08-16T04:20:09,031 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | OSGiBindingRuntimeContextImpl | 333 - org.opendaylight.yangtools.binding-runtime-osgi - 14.0.14 | BindingRuntimeContext generation 1 activated 2025-08-16T04:20:09,050 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | OSGiBindingDOMCodecServicesImpl | 326 - org.opendaylight.yangtools.binding-data-codec-osgi - 14.0.14 | Binding/DOM Codec generation 1 activated 2025-08-16T04:20:09,050 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | GlobalBindingDOMCodecServices | 326 - org.opendaylight.yangtools.binding-data-codec-osgi - 14.0.14 | Global Binding/DOM Codec activated with generation 1 2025-08-16T04:20:09,055 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | OSGiDatastoreContextIntrospectorFactory | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Datastore Context Introspector activated 2025-08-16T04:20:09,057 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | OSGiDistributedDataStore | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Distributed Datastore type CONFIGURATION starting 2025-08-16T04:20:09,278 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | DistributedDataStoreFactory | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Create data store instance of type : config 2025-08-16T04:20:09,279 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | AbstractModuleShardConfigProvider | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Config file exists - reading config from it 2025-08-16T04:20:09,279 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | AbstractModuleShardConfigProvider | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Config file exists - reading config from it 2025-08-16T04:20:09,284 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | AbstractDataStore | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Creating ShardManager : shardmanager-config 2025-08-16T04:20:09,306 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Starting 
ShardManager shard-manager-config 2025-08-16T04:20:09,315 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Recovery complete 2025-08-16T04:20:09,339 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received MemberUp: memberName: MemberName{name=member-2}, address: pekko://opendaylight-cluster-data@10.30.170.196:2550 2025-08-16T04:20:09,340 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-default-config with address pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-config/member-2-shard-default-config 2025-08-16T04:20:09,340 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-topology-config with address pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-config/member-2-shard-topology-config 2025-08-16T04:20:09,340 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-inventory-config with address pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-config/member-2-shard-inventory-config 2025-08-16T04:20:09,340 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-toaster-config with address pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-config/member-2-shard-toaster-config 2025-08-16T04:20:09,340 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received MemberUp: memberName: MemberName{name=member-3}, address: pekko://opendaylight-cluster-data@10.30.170.30:2550 2025-08-16T04:20:09,341 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-3-shard-default-config with address pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-config/member-3-shard-default-config 2025-08-16T04:20:09,341 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-3-shard-topology-config with address pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-config/member-3-shard-topology-config 2025-08-16T04:20:09,341 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-3-shard-inventory-config with address pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-config/member-3-shard-inventory-config 2025-08-16T04:20:09,341 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-3-shard-toaster-config with address 
pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-config/member-3-shard-toaster-config 2025-08-16T04:20:09,341 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received MemberUp: memberName: MemberName{name=member-1}, address: pekko://opendaylight-cluster-data@10.30.170.62:2550 2025-08-16T04:20:09,341 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-default-config with address pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-config/member-1-shard-default-config 2025-08-16T04:20:09,342 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-topology-config with address pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-config/member-1-shard-topology-config 2025-08-16T04:20:09,342 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-inventory-config with address pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-config/member-1-shard-inventory-config 2025-08-16T04:20:09,342 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-toaster-config with address pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-config/member-1-shard-toaster-config 2025-08-16T04:20:09,362 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | DistributedDataStoreFactory | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Data store config is using tell-based protocol 2025-08-16T04:20:09,365 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | AbstractModuleShardConfigProvider | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Config file exists - reading config from it 2025-08-16T04:20:09,366 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | AbstractModuleShardConfigProvider | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Config file exists - reading config from it 2025-08-16T04:20:09,366 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-default-config with address pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-config/member-1-shard-default-config 2025-08-16T04:20:09,367 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-default-config with address pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-config/member-2-shard-default-config 2025-08-16T04:20:09,367 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-3-shard-default-config with address 
pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-config/member-3-shard-default-config 2025-08-16T04:20:09,368 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-topology-config with address pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-config/member-1-shard-topology-config 2025-08-16T04:20:09,368 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-topology-config with address pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-config/member-2-shard-topology-config 2025-08-16T04:20:09,369 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-3-shard-topology-config with address pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-config/member-3-shard-topology-config 2025-08-16T04:20:09,370 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | OSGiDistributedDataStore | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Distributed Datastore type OPERATIONAL starting 2025-08-16T04:20:09,370 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-inventory-config with address pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-config/member-1-shard-inventory-config 2025-08-16T04:20:09,371 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-inventory-config with address pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-config/member-2-shard-inventory-config 2025-08-16T04:20:09,371 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-3-shard-inventory-config with address pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-config/member-3-shard-inventory-config 2025-08-16T04:20:09,372 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-toaster-config with address pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-config/member-1-shard-toaster-config 2025-08-16T04:20:09,372 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-toaster-config with address pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-config/member-2-shard-toaster-config 2025-08-16T04:20:09,373 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-3-shard-toaster-config with address pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-config/member-3-shard-toaster-config 2025-08-16T04:20:09,376 | INFO | Start 
Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | DistributedDataStoreFactory | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Create data store instance of type : operational 2025-08-16T04:20:09,377 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | AbstractDataStore | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Creating ShardManager : shardmanager-operational 2025-08-16T04:20:09,378 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-topology-config: Shard created, persistent : true 2025-08-16T04:20:09,378 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-default-config: Shard created, persistent : true 2025-08-16T04:20:09,378 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: Shard created, persistent : true 2025-08-16T04:20:09,379 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-toaster-config: Shard created, persistent : true 2025-08-16T04:20:09,389 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Starting ShardManager shard-manager-operational 2025-08-16T04:20:09,400 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | DistributedDataStoreFactory | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Data store operational is using tell-based protocol 2025-08-16T04:20:09,406 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Recovery complete 2025-08-16T04:20:09,407 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received MemberUp: memberName: MemberName{name=member-2}, address: pekko://opendaylight-cluster-data@10.30.170.196:2550 2025-08-16T04:20:09,407 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-default-operational with address pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-operational/member-2-shard-default-operational 2025-08-16T04:20:09,407 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-topology-operational with address pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-operational/member-2-shard-topology-operational 2025-08-16T04:20:09,407 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-inventory-operational with address pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-operational/member-2-shard-inventory-operational 2025-08-16T04:20:09,407 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardInformation | 
196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-toaster-operational with address pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-operational/member-2-shard-toaster-operational 2025-08-16T04:20:09,408 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received MemberUp: memberName: MemberName{name=member-3}, address: pekko://opendaylight-cluster-data@10.30.170.30:2550 2025-08-16T04:20:09,408 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-3-shard-default-operational with address pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-operational/member-3-shard-default-operational 2025-08-16T04:20:09,408 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-3-shard-topology-operational with address pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-operational/member-3-shard-topology-operational 2025-08-16T04:20:09,408 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-3-shard-inventory-operational with address pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-operational/member-3-shard-inventory-operational 2025-08-16T04:20:09,408 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-3-shard-toaster-operational with address pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-operational/member-3-shard-toaster-operational 2025-08-16T04:20:09,409 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received MemberUp: memberName: MemberName{name=member-1}, address: pekko://opendaylight-cluster-data@10.30.170.62:2550 2025-08-16T04:20:09,409 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-default-operational with address pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-operational/member-1-shard-default-operational 2025-08-16T04:20:09,409 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-topology-operational with address pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-operational/member-1-shard-topology-operational 2025-08-16T04:20:09,409 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-inventory-operational with address pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-operational/member-1-shard-inventory-operational 2025-08-16T04:20:09,409 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardInformation | 196 - 
org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-toaster-operational with address pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-operational/member-1-shard-toaster-operational 2025-08-16T04:20:09,409 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | OSGiBlockingBindingNormalizer | 204 - org.opendaylight.mdsal.binding-dom-adapter - 14.0.13 | Binding/DOM adapter activated 2025-08-16T04:20:09,410 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-default-operational with address pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-operational/member-1-shard-default-operational 2025-08-16T04:20:09,410 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-default-operational with address pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-operational/member-2-shard-default-operational 2025-08-16T04:20:09,410 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-3-shard-default-operational with address pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-operational/member-3-shard-default-operational 2025-08-16T04:20:09,411 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-topology-operational with address pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-operational/member-1-shard-topology-operational 2025-08-16T04:20:09,411 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-topology-operational with address pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-operational/member-2-shard-topology-operational 2025-08-16T04:20:09,411 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-3-shard-topology-operational with address pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-operational/member-3-shard-topology-operational 2025-08-16T04:20:09,412 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-inventory-operational with address pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-operational/member-1-shard-inventory-operational 2025-08-16T04:20:09,412 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-default-operational: Shard created, persistent : false 2025-08-16T04:20:09,412 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-inventory-operational with address 
pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-operational/member-2-shard-inventory-operational 2025-08-16T04:20:09,412 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-3-shard-inventory-operational with address pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-operational/member-3-shard-inventory-operational 2025-08-16T04:20:09,412 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-1-shard-toaster-operational with address pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-operational/member-1-shard-toaster-operational 2025-08-16T04:20:09,413 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-2-shard-toaster-operational with address pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-operational/member-2-shard-toaster-operational 2025-08-16T04:20:09,413 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardInformation | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | updatePeerAddress for peer member-3-shard-toaster-operational with address pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-operational/member-3-shard-toaster-operational 2025-08-16T04:20:09,413 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-topology-operational: Shard created, persistent : false 2025-08-16T04:20:09,414 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-toaster-operational: Shard created, persistent : false 2025-08-16T04:20:09,414 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-operational: Shard created, persistent : false 2025-08-16T04:20:09,423 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-40 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier:pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-operational/member-3-shard-toaster-operational/member-3-shard-toaster-operational-notifier#1101440300 created and ready for shard:member-3-shard-toaster-operational 2025-08-16T04:20:09,424 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-40 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier:pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-operational/member-3-shard-default-operational/member-3-shard-default-operational-notifier#1102696105 created and ready for shard:member-3-shard-default-operational 2025-08-16T04:20:09,425 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-41 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier:pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-operational/member-3-shard-topology-operational/member-3-shard-topology-operational-notifier#-1395663334 created and ready for 
shard:member-3-shard-topology-operational 2025-08-16T04:20:09,425 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | AbstractAdaptedService | 204 - org.opendaylight.mdsal.binding-dom-adapter - 14.0.13 | Binding/DOM adapter for MountPointService activated 2025-08-16T04:20:09,425 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-40 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier:pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-operational/member-3-shard-inventory-operational/member-3-shard-inventory-operational-notifier#-998629333 created and ready for shard:member-3-shard-inventory-operational 2025-08-16T04:20:09,426 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-40 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier:pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-config/member-3-shard-inventory-config/member-3-shard-inventory-config-notifier#-1291086039 created and ready for shard:member-3-shard-inventory-config 2025-08-16T04:20:09,426 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-40 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier:pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-config/member-3-shard-default-config/member-3-shard-default-config-notifier#-1531119632 created and ready for shard:member-3-shard-default-config 2025-08-16T04:20:09,427 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | RaftActor | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-config: Starting recovery with journal batch size 1 2025-08-16T04:20:09,427 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-40 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier:pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-config/member-3-shard-toaster-config/member-3-shard-toaster-config-notifier#591520203 created and ready for shard:member-3-shard-toaster-config 2025-08-16T04:20:09,427 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | RaftActor | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-operational: Starting recovery with journal batch size 1 2025-08-16T04:20:09,428 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | RaftActor | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-toaster-operational: Starting recovery with journal batch size 1 2025-08-16T04:20:09,428 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | RaftActor | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-operational: Starting recovery with journal batch size 1 2025-08-16T04:20:09,429 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | RaftActor | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-config: Starting recovery with journal batch size 1 2025-08-16T04:20:09,429 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier:pekko://opendaylight-cluster-data@10.30.170.30:2550/user/shardmanager-config/member-3-shard-topology-config/member-3-shard-topology-config-notifier#-214696007 created and ready for 
shard:member-3-shard-topology-config 2025-08-16T04:20:09,429 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | RaftActor | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-config: Starting recovery with journal batch size 1 2025-08-16T04:20:09,429 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | RaftActor | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-toaster-config: Starting recovery with journal batch size 1 2025-08-16T04:20:09,430 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | RaftActor | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-operational: Starting recovery with journal batch size 1 2025-08-16T04:20:09,437 | INFO | opendaylight-cluster-data-pekko.persistence.dispatchers.default-plugin-dispatcher-47 | SegmentedFileJournal | 191 - org.opendaylight.controller.sal-akka-segmented-journal - 11.0.0 | Initialized with root directory segmented-journal with storage DISK 2025-08-16T04:20:09,438 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | DOMNotificationRouter | 250 - org.opendaylight.mdsal.mdsal-dom-broker - 14.0.13 | DOM Notification Router started 2025-08-16T04:20:09,440 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.topology-lldp-discovery/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (objectClass=org.opendaylight.mdsal.binding.api.NotificationPublishService), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService), (objectClass=org.opendaylight.mdsal.eos.binding.api.EntityOwnershipService)] 2025-08-16T04:20:09,441 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | AbstractAdaptedService | 204 - org.opendaylight.mdsal.binding-dom-adapter - 14.0.13 | Binding/DOM adapter for NotificationService activated 2025-08-16T04:20:09,444 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.topology-lldp-discovery/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService), (objectClass=org.opendaylight.mdsal.eos.binding.api.EntityOwnershipService)] 2025-08-16T04:20:09,444 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | AbstractAdaptedService | 204 - org.opendaylight.mdsal.binding-dom-adapter - 14.0.13 | Binding/DOM adapter for NotificationPublishService activated 2025-08-16T04:20:09,451 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.forwardingrules-manager/0.20.0 is waiting for dependencies [(&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.mdsal.binding.api.RpcProviderService), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker), 
(objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService)] 2025-08-16T04:20:09,451 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.lldp-speaker/0.20.0 is waiting for dependencies [(&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (objectClass=org.opendaylight.openflowplugin.applications.deviceownershipservice.DeviceOwnershipService), (objectClass=org.opendaylight.mdsal.binding.api.RpcProviderService)] 2025-08-16T04:20:09,452 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | AbstractAdaptedService | 204 - org.opendaylight.mdsal.binding-dom-adapter - 14.0.13 | Binding/DOM adapter for RpcService activated 2025-08-16T04:20:09,453 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.cert/0.21.0 is waiting for dependencies [(objectClass=org.opendaylight.aaa.encrypt.AAAEncryptionService), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker)), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker)] 2025-08-16T04:20:09,457 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | AbstractAdaptedService | 204 - org.opendaylight.mdsal.binding-dom-adapter - 14.0.13 | Binding/DOM adapter for RpcProviderService activated 2025-08-16T04:20:09,505 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | ClusterSingletonProxy | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Singleton identified at [pekko://opendaylight-cluster-data@10.30.170.196:2550/system/singletonManagerOwnerSupervisor/OwnerSupervisor] 2025-08-16T04:20:09,506 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | EnabledRaftStorage | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-operational: journal open: applyTo=0 2025-08-16T04:20:09,508 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | EnabledRaftStorage | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-config: journal open: applyTo=18 2025-08-16T04:20:09,509 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | EnabledRaftStorage | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-operational: journal open: applyTo=0 2025-08-16T04:20:09,509 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | EnabledRaftStorage | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-toaster-config: journal open: applyTo=0 2025-08-16T04:20:09,509 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | EnabledRaftStorage | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-config: journal open: applyTo=0 2025-08-16T04:20:09,509 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | EnabledRaftStorage | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-toaster-operational: journal open: applyTo=0 2025-08-16T04:20:09,505 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | EnabledRaftStorage | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-operational: journal open: applyTo=0 2025-08-16T04:20:09,518 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | 
EnabledRaftStorage | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-config: journal open: applyTo=167 2025-08-16T04:20:09,537 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-39 | ClusterSingletonManager | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | ClusterSingletonManager state change [Start -> Younger] 2025-08-16T04:20:09,543 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-toaster-config: Recovery completed - Switching actor to Follower - last log index = -1, last log term = -1, snapshot index = -1, snapshot term = -1, journal size = 0 2025-08-16T04:20:09,543 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-config: Recovery completed - Switching actor to Follower - last log index = -1, last log term = -1, snapshot index = -1, snapshot term = -1, journal size = 0 2025-08-16T04:20:09,543 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-config: Recovery completed - Switching actor to Follower - last log index = -1, last log term = -1, snapshot index = -1, snapshot term = -1, journal size = 0 2025-08-16T04:20:09,546 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-operational: Recovery completed - Switching actor to Follower - last log index = -1, last log term = -1, snapshot index = -1, snapshot term = -1, journal size = 0 2025-08-16T04:20:09,551 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-39 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-toaster-config , received role change from null to Follower 2025-08-16T04:20:09,551 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-39 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-inventory-operational , received role change from null to Follower 2025-08-16T04:20:09,554 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.forwardingrules-manager/0.20.0 is waiting for dependencies [(&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService)] 2025-08-16T04:20:09,554 | INFO | Blueprint Extender: 2 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.lldp-speaker/0.20.0 is waiting for dependencies [(&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (objectClass=org.opendaylight.openflowplugin.applications.deviceownershipservice.DeviceOwnershipService)] 2025-08-16T04:20:09,560 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Follower | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | 
member-3-shard-toaster-config (Follower): Term 5 in "AppendEntries{term=5, leaderId=member-1-shard-toaster-config, prevLogIndex=-1, prevLogTerm=-1, leaderCommit=-1, replicatedToAllIndex=-1, payloadVersion=13, recipientRaftVersion=5, leaderRaftVersion=5, entries==[]}" message is greater than follower's term 4 - updating term 2025-08-16T04:20:09,560 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-toaster-operational: Recovery completed - Switching actor to Follower - last log index = -1, last log term = -1, snapshot index = -1, snapshot term = -1, journal size = 0 2025-08-16T04:20:09,561 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-39 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-toaster-operational , received role change from null to Follower 2025-08-16T04:20:09,563 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | Recovery snapshot applied for member-3-shard-default-operational in 5.680 μs: snapshotIndex=-1, snapshotTerm=-1, journal-size=0 2025-08-16T04:20:09,564 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-operational: Recovery completed in 1.236 ms - Switching actor to Follower - last log index = -1, last log term = -1, snapshot index = -1, snapshot term = -1, journal size = 0 2025-08-16T04:20:09,564 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | Recovery snapshot applied for member-3-shard-topology-operational in 1.260 μs: snapshotIndex=-1, snapshotTerm=-1, journal-size=0 2025-08-16T04:20:09,564 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-41 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-default-operational , received role change from null to Follower 2025-08-16T04:20:09,564 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-operational: Recovery completed in 482.2 μs - Switching actor to Follower - last log index = -1, last log term = -1, snapshot index = -1, snapshot term = -1, journal size = 0 2025-08-16T04:20:09,565 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-operational: Peer address for peer member-1-shard-inventory-operational set to pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-operational/member-1-shard-inventory-operational 2025-08-16T04:20:09,565 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-operational: Peer address for peer member-2-shard-inventory-operational set to pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-operational/member-2-shard-inventory-operational 2025-08-16T04:20:09,565 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-operational: Peer address for peer member-1-shard-topology-operational set to 
pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-operational/member-1-shard-topology-operational 2025-08-16T04:20:09,565 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-40 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-inventory-operational , registered listener pekko://opendaylight-cluster-data/user/shardmanager-operational 2025-08-16T04:20:09,566 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-40 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-topology-operational , received role change from null to Follower 2025-08-16T04:20:09,566 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-operational: Peer address for peer member-2-shard-topology-operational set to pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-operational/member-2-shard-topology-operational 2025-08-16T04:20:09,566 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-toaster-operational: Peer address for peer member-1-shard-toaster-operational set to pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-operational/member-1-shard-toaster-operational 2025-08-16T04:20:09,567 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-toaster-operational: Peer address for peer member-2-shard-toaster-operational set to pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-operational/member-2-shard-toaster-operational 2025-08-16T04:20:09,567 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received role changed for member-3-shard-inventory-operational from null to Follower 2025-08-16T04:20:09,567 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-40 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-topology-operational , registered listener pekko://opendaylight-cluster-data/user/shardmanager-operational 2025-08-16T04:20:09,567 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-40 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-toaster-operational , registered listener pekko://opendaylight-cluster-data/user/shardmanager-operational 2025-08-16T04:20:09,567 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received role changed for member-3-shard-topology-operational from null to Follower 2025-08-16T04:20:09,568 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-operational: Peer address for peer member-1-shard-default-operational set to pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-operational/member-1-shard-default-operational 2025-08-16T04:20:09,568 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardManager | 196 - 
org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received role changed for member-3-shard-toaster-operational from null to Follower 2025-08-16T04:20:09,569 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-operational: Peer address for peer member-2-shard-default-operational set to pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-operational/member-2-shard-default-operational 2025-08-16T04:20:09,570 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | AbstractAdaptedService | 204 - org.opendaylight.mdsal.binding-dom-adapter - 14.0.13 | Binding/DOM adapter for ActionService activated 2025-08-16T04:20:09,574 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | AbstractAdaptedService | 204 - org.opendaylight.mdsal.binding-dom-adapter - 14.0.13 | Binding/DOM adapter for ActionProviderService activated 2025-08-16T04:20:09,574 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | DynamicBindingAdapter | 204 - org.opendaylight.mdsal.binding-dom-adapter - 14.0.13 | 8 DOMService trackers started 2025-08-16T04:20:09,575 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-config: Peer address for peer member-1-shard-inventory-config set to pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-config/member-1-shard-inventory-config 2025-08-16T04:20:09,575 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.lldp-speaker/0.20.0 is waiting for dependencies [(&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.mdsal.binding.api.DataBroker)] 2025-08-16T04:20:09,576 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.topology-lldp-discovery/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.mdsal.binding.api.DataBroker), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.yangtools.binding.data.codec.api.BindingNormalizedNodeSerializer)), (objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService)] 2025-08-16T04:20:09,575 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-config: Peer address for peer member-2-shard-inventory-config set to pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-config/member-2-shard-inventory-config 2025-08-16T04:20:09,576 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-toaster-config: Peer address for peer member-1-shard-toaster-config set to pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-config/member-1-shard-toaster-config 2025-08-16T04:20:09,576 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-toaster-config: Peer address for peer member-2-shard-toaster-config set to 
pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-config/member-2-shard-toaster-config 2025-08-16T04:20:09,577 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config Received follower initial sync status for member-3-shard-toaster-config status sync done false 2025-08-16T04:20:09,578 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-40 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-toaster-config , registered listener pekko://opendaylight-cluster-data/user/shardmanager-config 2025-08-16T04:20:09,578 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-31 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-default-operational , registered listener pekko://opendaylight-cluster-data/user/shardmanager-operational 2025-08-16T04:20:09,578 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-41 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-inventory-config , received role change from null to Follower 2025-08-16T04:20:09,578 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-31 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-inventory-config , registered listener pekko://opendaylight-cluster-data/user/shardmanager-config 2025-08-16T04:20:09,578 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received role changed for member-3-shard-default-operational from null to Follower 2025-08-16T04:20:09,579 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@79fb921c 2025-08-16T04:20:09,580 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received role changed for member-3-shard-toaster-config from null to Follower 2025-08-16T04:20:09,580 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received role changed for member-3-shard-inventory-config from null to Follower 2025-08-16T04:20:09,581 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | GlobalBindingRuntimeContext | 333 - org.opendaylight.yangtools.binding-runtime-osgi - 14.0.14 | Global BindingRuntimeContext generation 1 activated 2025-08-16T04:20:09,582 | INFO | Start Level: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | OSGiModelRuntime | 333 - org.opendaylight.yangtools.binding-runtime-osgi - 14.0.14 | Model Runtime started 2025-08-16T04:20:09,585 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | PekkoRecovery | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-config: Recovery completed - Switching actor to Follower - last log index = -1, last log term = -1, snapshot index = -1, snapshot term = -1, 
journal size = 0 2025-08-16T04:20:09,586 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-topology-config , received role change from null to Follower 2025-08-16T04:20:09,586 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-config: Peer address for peer member-1-shard-topology-config set to pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-config/member-1-shard-topology-config 2025-08-16T04:20:09,587 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-config: Peer address for peer member-2-shard-topology-config set to pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-config/member-2-shard-topology-config 2025-08-16T04:20:09,587 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-topology-config , registered listener pekko://opendaylight-cluster-data/user/shardmanager-config 2025-08-16T04:20:09,587 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received role changed for member-3-shard-topology-config from null to Follower 2025-08-16T04:20:09,612 | INFO | Framework Event Dispatcher: Equinox Container: 5f8e399e-77e6-4df2-a109-ed1d528458c4 | Main | 3 - org.ops4j.pax.logging.pax-logging-api - 2.2.8 | Karaf started in 8s. 
Bundle stats: 399 active, 400 total 2025-08-16T04:20:09,628 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config Received follower initial sync status for member-3-shard-inventory-config status sync done false 2025-08-16T04:20:09,628 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@10adbc47 2025-08-16T04:20:09,634 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config Received follower initial sync status for member-3-shard-inventory-config status sync done true 2025-08-16T04:20:09,660 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Follower | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-operational (Follower): Term 5 in "AppendEntries{term=5, leaderId=member-1-shard-default-operational, prevLogIndex=58, prevLogTerm=4, leaderCommit=-1, replicatedToAllIndex=-1, payloadVersion=13, recipientRaftVersion=5, leaderRaftVersion=5, entries==[]}" message is greater than follower's term 4 - updating term 2025-08-16T04:20:09,660 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Follower | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-operational (Follower): Term 5 in "AppendEntries{term=5, leaderId=member-1-shard-topology-operational, prevLogIndex=26, prevLogTerm=4, leaderCommit=-1, replicatedToAllIndex=-1, payloadVersion=13, recipientRaftVersion=5, leaderRaftVersion=5, entries==[]}" message is greater than follower's term 4 - updating term 2025-08-16T04:20:09,663 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Follower | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-operational (Follower): The followers log is empty and the senders prevLogIndex is 58 2025-08-16T04:20:09,663 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Follower | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-operational (Follower): Follower is out-of-sync so sending negative reply: AppendEntriesReply{term=5, success=false, followerId=member-3-shard-default-operational, logLastIndex=-1, logLastTerm=-1, forceInstallSnapshot=false, needsLeaderAddress=false, payloadVersion=13, raftVersion=5, recipientRaftVersion=5} 2025-08-16T04:20:09,664 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Follower | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-operational (Follower): The followers log is empty and the senders prevLogIndex is 26 2025-08-16T04:20:09,665 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Follower | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-operational (Follower): Follower is out-of-sync so sending negative reply: AppendEntriesReply{term=5, success=false, followerId=member-3-shard-topology-operational, logLastIndex=-1, logLastTerm=-1, forceInstallSnapshot=false, needsLeaderAddress=false, payloadVersion=13, raftVersion=5, recipientRaftVersion=5} 2025-08-16T04:20:09,664 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | 
shard-manager-operational: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@577408c5 2025-08-16T04:20:09,666 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational Received follower initial sync status for member-3-shard-default-operational status sync done false 2025-08-16T04:20:09,666 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational Received follower initial sync status for member-3-shard-topology-operational status sync done false 2025-08-16T04:20:09,666 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@388af3b2 2025-08-16T04:20:09,678 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational Received follower initial sync status for member-3-shard-topology-operational status sync done true 2025-08-16T04:20:09,685 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational Received follower initial sync status for member-3-shard-default-operational status sync done true 2025-08-16T04:20:09,686 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Follower | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-operational (Follower): Snapshot received from leader: member-1-shard-topology-operational 2025-08-16T04:20:09,688 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Follower | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-operational (Follower): Snapshot received from leader: member-1-shard-default-operational 2025-08-16T04:20:09,688 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational Received follower initial sync status for member-3-shard-topology-operational status sync done false 2025-08-16T04:20:09,689 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | SnapshotManager | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-operational: Applying snapshot on follower: PlainSnapshotSource{io=MemoryStreamSource{size=51501}} 2025-08-16T04:20:09,689 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational Received follower initial sync status for member-3-shard-default-operational status sync done false 2025-08-16T04:20:09,689 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | SnapshotManager | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-operational: Applying snapshot on follower: PlainSnapshotSource{io=MemoryStreamSource{size=975}} 2025-08-16T04:20:09,702 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | SnapshotManager | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-topology-operational: snapshot is durable as of 
2025-08-16T04:20:09.696471762Z 2025-08-16T04:20:09,703 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | ShardSnapshotCohort | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-topology-operational: Applying snapshot 2025-08-16T04:20:09,710 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | ShardSnapshotCohort | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-topology-operational: Done applying snapshot 2025-08-16T04:20:09,728 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | SnapshotManager | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-operational: snapshot is durable as of 2025-08-16T04:20:09.725170202Z 2025-08-16T04:20:09,728 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | ShardSnapshotCohort | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-default-operational: Applying snapshot 2025-08-16T04:20:09,729 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | ShardSnapshotCohort | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-default-operational: Done applying snapshot 2025-08-16T04:20:09,757 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-default-config , received role change from null to Follower 2025-08-16T04:20:09,757 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Follower | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-config (Follower): Term 6 in "AppendEntries{term=6, leaderId=member-1-shard-default-config, prevLogIndex=165, prevLogTerm=5, leaderCommit=-1, replicatedToAllIndex=-1, payloadVersion=13, recipientRaftVersion=5, leaderRaftVersion=5, entries==[]}" message is greater than follower's term 5 - updating term 2025-08-16T04:20:09,761 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-config: Peer address for peer member-1-shard-default-config set to pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-config/member-1-shard-default-config 2025-08-16T04:20:09,761 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | PeerInfos | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-default-config: Peer address for peer member-2-shard-default-config set to pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-config/member-2-shard-default-config 2025-08-16T04:20:09,762 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-default-config , registered listener pekko://opendaylight-cluster-data/user/shardmanager-config 2025-08-16T04:20:09,762 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config Received follower initial sync status for member-3-shard-default-config status sync done false 2025-08-16T04:20:09,762 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received LeaderStateChanged message: 
org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@80c848a 2025-08-16T04:20:09,763 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received role changed for member-3-shard-default-config from null to Follower 2025-08-16T04:20:09,768 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config Received follower initial sync status for member-3-shard-default-config status sync done true 2025-08-16T04:20:09,800 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational Received follower initial sync status for member-3-shard-toaster-operational status sync done false 2025-08-16T04:20:09,800 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config Received follower initial sync status for member-3-shard-topology-config status sync done false 2025-08-16T04:20:09,801 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@1c881ac6 2025-08-16T04:20:09,801 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@2633c6f4 2025-08-16T04:20:09,801 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: All Shards are ready - data store config is ready 2025-08-16T04:20:09,804 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | OSGiDOMStore | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Datastore service type CONFIGURATION activated 2025-08-16T04:20:09,804 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | OSGiDistributedDataStore | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Distributed Datastore type CONFIGURATION started 2025-08-16T04:20:09,919 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Follower | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-operational (Follower): The followers log is empty and the senders prevLogIndex is 950 2025-08-16T04:20:09,920 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Follower | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-operational (Follower): Follower is out-of-sync so sending negative reply: AppendEntriesReply{term=5, success=false, followerId=member-3-shard-inventory-operational, logLastIndex=-1, logLastTerm=-1, forceInstallSnapshot=false, needsLeaderAddress=false, payloadVersion=13, raftVersion=5, recipientRaftVersion=5} 2025-08-16T04:20:09,921 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: Received LeaderStateChanged message: 
org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@7316d4d4 2025-08-16T04:20:09,921 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational: All Shards are ready - data store operational is ready 2025-08-16T04:20:09,921 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational Received follower initial sync status for member-3-shard-inventory-operational status sync done false 2025-08-16T04:20:09,924 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-40 | OSGiDOMStore | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Datastore service type OPERATIONAL activated 2025-08-16T04:20:09,927 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Follower | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-operational (Follower): Cannot append entries because the replicatedToAllIndex 949 does not appear to be in the in-memory journal - lastIndex: -1, snapshotIndex: -1, snapshotTerm: -1 2025-08-16T04:20:09,927 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Follower | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-operational (Follower): Follower is out-of-sync so sending negative reply: AppendEntriesReply{term=5, success=false, followerId=member-3-shard-inventory-operational, logLastIndex=-1, logLastTerm=-1, forceInstallSnapshot=false, needsLeaderAddress=false, payloadVersion=13, raftVersion=5, recipientRaftVersion=5} 2025-08-16T04:20:09,928 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational Received follower initial sync status for member-3-shard-inventory-operational status sync done true 2025-08-16T04:20:09,929 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Follower | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-operational (Follower): Snapshot received from leader: member-2-shard-inventory-operational 2025-08-16T04:20:09,929 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational Received follower initial sync status for member-3-shard-inventory-operational status sync done false 2025-08-16T04:20:09,929 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | SnapshotManager | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-operational: Applying snapshot on follower: PlainSnapshotSource{io=MemoryStreamSource{size=7893}} 2025-08-16T04:20:09,944 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-40 | OSGiClusterAdmin | 193 - org.opendaylight.controller.sal-cluster-admin-impl - 11.0.0 | Cluster Admin services started 2025-08-16T04:20:09,946 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | SnapshotManager | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-operational: snapshot is durable as of 2025-08-16T04:20:09.944023448Z 2025-08-16T04:20:09,946 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | ShardSnapshotCohort | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-operational: Applying snapshot 
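
The entries above show each follower shard reporting "follower initial sync status ... sync done true" and the shard manager declaring both the config and operational datastores ready. A minimal sketch of how one might confirm the same state out-of-band, assuming a reachable Jolokia endpoint on port 8181, default admin credentials, and the conventional OpenDaylight shard-manager MBean names (the host, port, credentials and MBean names below are assumptions, not taken from this log):

```python
# Illustrative only: poll the datastore shard managers' SyncStatus over Jolokia.
# Host, port, credentials and MBean names are assumptions for this sketch.
import base64
import json
import urllib.request

HOST = "10.30.170.30"  # member-3 address seen in the log above (assumed reachable)
AUTH = base64.b64encode(b"admin:admin").decode()  # assumed default credentials

MBEANS = {
    "config": ("org.opendaylight.controller:type=DistributedConfigDatastore,"
               "Category=ShardManager,name=shard-manager-config"),
    "operational": ("org.opendaylight.controller:type=DistributedOperationalDatastore,"
                    "Category=ShardManager,name=shard-manager-operational"),
}

def sync_status(kind: str) -> bool:
    """Read the SyncStatus attribute of the given shard-manager MBean."""
    url = f"http://{HOST}:8181/jolokia/read/{MBEANS[kind]}"
    req = urllib.request.Request(url, headers={"Authorization": f"Basic {AUTH}"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        payload = json.loads(resp.read())
    return bool(payload["value"]["SyncStatus"])

if __name__ == "__main__":
    for kind in MBEANS:
        print(kind, "datastore in sync:", sync_status(kind))
```

A SyncStatus of true corresponds to the "sync done true" transitions recorded in the surrounding entries.
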
2025-08-16T04:20:09,947 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | ShardSnapshotCohort | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-operational: Done applying snapshot 2025-08-16T04:20:09,957 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-40 | ConcurrentDOMDataBroker | 358 - org.opendaylight.yangtools.util - 14.0.14 | ThreadFactory created: CommitFutures 2025-08-16T04:20:09,959 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-40 | DataBrokerCommitExecutor | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | DOM Data Broker commit exector started 2025-08-16T04:20:09,961 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-40 | ConcurrentDOMDataBroker | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | DOM Data Broker started 2025-08-16T04:20:09,965 | INFO | Blueprint Extender: 2 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.cert/0.21.0 is waiting for dependencies [(objectClass=org.opendaylight.aaa.encrypt.AAAEncryptionService), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker))] 2025-08-16T04:20:09,965 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-40 | AbstractAdaptedService | 204 - org.opendaylight.mdsal.binding-dom-adapter - 14.0.13 | Binding/DOM adapter for DataBroker activated 2025-08-16T04:20:10,006 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-config: resolved shard 0 to ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-config/member-1-shard-default-config#-1990541384], sessionId=0, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=absent} 2025-08-16T04:20:10,007 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-config: resolving connection ConnectingClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=2}, cookie=0} to ConnectedClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=2}, cookie=0, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-config/member-1-shard-default-config#-1990541384], sessionId=0, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=absent}} 2025-08-16T04:20:10,014 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-config: replaced connection ConnectingClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=2}, cookie=0} with ConnectedClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=2}, cookie=0, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-config/member-1-shard-default-config#-1990541384], sessionId=0, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=absent}} in 7.164 ms 2025-08-16T04:20:10,037 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - 
org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.cert/0.21.0 is waiting for dependencies [Initial app config AaaCertServiceConfig] 2025-08-16T04:20:10,061 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-40 | OSGiPasswordServiceConfigBootstrap | 170 - org.opendaylight.aaa.password-service-impl - 0.21.0 | Listening for password service configuration 2025-08-16T04:20:10,063 | INFO | Blueprint Extender: 2 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.shiro/0.21.0 is waiting for dependencies [(objectClass=org.opendaylight.aaa.api.password.service.PasswordHashService), (objectClass=org.opendaylight.aaa.cert.api.ICertificateManager), Initial app config DatastoreConfig, (objectClass=org.opendaylight.aaa.api.PasswordCredentialAuth), (objectClass=org.opendaylight.aaa.api.IIDMStore), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker))] 2025-08-16T04:20:10,076 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.shiro/0.21.0 is waiting for dependencies [(objectClass=org.opendaylight.aaa.api.password.service.PasswordHashService), (objectClass=org.opendaylight.aaa.cert.api.ICertificateManager), Initial app config DatastoreConfig, (objectClass=org.opendaylight.aaa.api.PasswordCredentialAuth), (&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker))] 2025-08-16T04:20:10,088 | ERROR | opendaylight-cluster-data-notification-dispatcher-51 | H2Store | 167 - org.opendaylight.aaa.idm-store-h2 - 0.21.0 | bundle org.opendaylight.aaa.idm-store-h2:0.21.0 (167)[org.opendaylight.aaa.datastore.h2.H2Store(5)] : Constructor argument 0 in class class org.opendaylight.aaa.datastore.h2.H2Store has unsupported type org.opendaylight.aaa.datastore.h2.ConnectionProvider 2025-08-16T04:20:10,092 | INFO | opendaylight-cluster-data-notification-dispatcher-51 | DefaultPasswordHashService | 170 - org.opendaylight.aaa.password-service-impl - 0.21.0 | DefaultPasswordHashService will utilize default iteration count=20000 2025-08-16T04:20:10,092 | INFO | opendaylight-cluster-data-notification-dispatcher-51 | DefaultPasswordHashService | 170 - org.opendaylight.aaa.password-service-impl - 0.21.0 | DefaultPasswordHashService will utilize default algorithm=SHA-512 2025-08-16T04:20:10,092 | INFO | opendaylight-cluster-data-notification-dispatcher-51 | DefaultPasswordHashService | 170 - org.opendaylight.aaa.password-service-impl - 0.21.0 | DefaultPasswordHashService will not utilize a private salt, since none was configured 2025-08-16T04:20:10,096 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.shiro/0.21.0 is waiting for dependencies [(objectClass=org.opendaylight.aaa.api.password.service.PasswordHashService), (objectClass=org.opendaylight.aaa.cert.api.ICertificateManager), (objectClass=org.opendaylight.aaa.api.PasswordCredentialAuth), Initial app config ShiroConfiguration] 2025-08-16T04:20:10,119 | INFO | opendaylight-cluster-data-notification-dispatcher-51 | H2Store | 167 - org.opendaylight.aaa.idm-store-h2 - 0.21.0 | H2 IDMStore activated 2025-08-16T04:20:10,124 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.shiro/0.21.0 is waiting for dependencies 
[(objectClass=org.opendaylight.aaa.api.password.service.PasswordHashService), (objectClass=org.opendaylight.aaa.cert.api.ICertificateManager), (objectClass=org.opendaylight.aaa.api.PasswordCredentialAuth)] 2025-08-16T04:20:10,130 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.shiro/0.21.0 is waiting for dependencies [(objectClass=org.opendaylight.aaa.cert.api.ICertificateManager), (objectClass=org.opendaylight.aaa.api.PasswordCredentialAuth)] 2025-08-16T04:20:10,131 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.shiro/0.21.0 is waiting for dependencies [(objectClass=org.opendaylight.aaa.cert.api.ICertificateManager)] 2025-08-16T04:20:10,132 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-40 | EOSClusterSingletonServiceProvider | 257 - org.opendaylight.mdsal.mdsal-singleton-impl - 14.0.13 | Cluster Singleton Service started 2025-08-16T04:20:10,139 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-40 | YangLibraryWriterSingleton | 291 - org.opendaylight.netconf.yanglib-mdsal-writer - 9.0.0 | ietf-yang-library writer registered 2025-08-16T04:20:10,150 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational Received follower initial sync status for member-3-shard-inventory-operational status sync done true 2025-08-16T04:20:10,153 | INFO | opendaylight-cluster-data-notification-dispatcher-50 | AAAEncryptionServiceImpl | 165 - org.opendaylight.aaa.encrypt-service-impl - 0.21.0 | AAAEncryptionService activated 2025-08-16T04:20:10,160 | INFO | opendaylight-cluster-data-notification-dispatcher-50 | OSGiEncryptionServiceConfigurator | 165 - org.opendaylight.aaa.encrypt-service-impl - 0.21.0 | Encryption Service enabled 2025-08-16T04:20:10,185 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-40 | ArbitratorReconciliationManagerImpl | 296 - org.opendaylight.openflowplugin.applications.arbitratorreconciliation-impl - 0.20.0 | ArbitratorReconciliationManager has started successfully. 
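
Apart from the single ERROR raised during AAA H2 store activation (the unsupported constructor-argument complaint a few entries back), this window is entirely INFO. A minimal sketch, assuming the on-disk karaf.log keeps the "timestamp | LEVEL | thread | logger | bundle | message" layout seen here with one entry per line, and using a hypothetical data/log/karaf.log path, of filtering the file down to ERROR/WARN entries:

```python
# Illustrative only: pull ERROR/WARN entries out of a Karaf log laid out as
# "<ISO timestamp> | <LEVEL> | <thread> | <logger> | <bundle> | <message>".
# The data/log/karaf.log path is an assumption.
import re
from pathlib import Path

ENTRY = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2}T[\d:,\.]+)\s*\|\s*(?P<level>ERROR|WARN)\s*\|"
)

def problems(log_file: str):
    """Yield (timestamp, level, rest-of-line) for every ERROR/WARN entry."""
    for line in Path(log_file).read_text(errors="replace").splitlines():
        m = ENTRY.match(line)
        if m:
            yield m.group("ts"), m.group("level"), line[m.end():].strip()

if __name__ == "__main__":
    for ts, level, rest in problems("data/log/karaf.log"):  # path assumed
        print(ts, level, rest[:120])
```
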
2025-08-16T04:20:10,217 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational Received follower initial sync status for member-3-shard-topology-operational status sync done true 2025-08-16T04:20:10,226 | INFO | Blueprint Extender: 2 | AaaCertMdsalProvider | 163 - org.opendaylight.aaa.cert - 0.21.0 | AaaCertMdsalProvider Initialized 2025-08-16T04:20:10,236 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-31 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-operational: resolved shard 0 to ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-operational/member-1-shard-default-operational#-534131459], sessionId=0, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=absent} 2025-08-16T04:20:10,237 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-31 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-operational: resolving connection ConnectingClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-operational, generation=2}, cookie=0} to ConnectedClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-operational, generation=2}, cookie=0, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-operational/member-1-shard-default-operational#-534131459], sessionId=0, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=absent}} 2025-08-16T04:20:10,241 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-31 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-operational: replaced connection ConnectingClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-operational, generation=2}, cookie=0} with ConnectedClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-operational, generation=2}, cookie=0, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/user/shardmanager-operational/member-1-shard-default-operational#-534131459], sessionId=0, version=POTASSIUM, maxMessages=1000, cookie=0, shard=default, dataTree=absent}} in 3.859 ms 2025-08-16T04:20:10,255 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.forwardingrules-manager/0.20.0 is waiting for dependencies [(&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker)), (objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService)] 2025-08-16T04:20:10,255 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational Received follower initial sync status for member-3-shard-default-operational status sync done true 2025-08-16T04:20:10,299 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.lldp-speaker/0.20.0 is waiting for dependencies [(&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker))] 2025-08-16T04:20:10,303 
| INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.forwardingrules-manager/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService)] 2025-08-16T04:20:10,315 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-40 | DeviceOwnershipServiceImpl | 298 - org.opendaylight.openflowplugin.applications.device-ownership-service - 0.20.0 | DeviceOwnershipService started 2025-08-16T04:20:10,322 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-operational Received follower initial sync status for member-3-shard-toaster-operational status sync done true 2025-08-16T04:20:10,322 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config Received follower initial sync status for member-3-shard-topology-config status sync done true 2025-08-16T04:20:10,329 | INFO | Blueprint Extender: 2 | LazyBindingList | 325 - org.opendaylight.yangtools.binding-data-codec-dynamic - 14.0.14 | Using lazy population for lists larger than 16 element(s) 2025-08-16T04:20:10,382 | INFO | Blueprint Extender: 3 | LLDPSpeaker | 300 - org.opendaylight.openflowplugin.applications.lldp-speaker - 0.20.0 | LLDPSpeaker started, it will send LLDP frames each 5 seconds 2025-08-16T04:20:10,399 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | DeviceOwnershipServiceImpl | 298 - org.opendaylight.openflowplugin.applications.device-ownership-service - 0.20.0 | Entity ownership change received for node : openflow:1 : REMOTE_OWNERSHIP_LOST_NO_OWNER [wasOwner=false, isOwner=false, hasOwner=false] 2025-08-16T04:20:10,401 | INFO | Blueprint Extender: 2 | CertificateManagerService | 163 - org.opendaylight.aaa.cert - 0.21.0 | Certificate Manager service has been initialized 2025-08-16T04:20:10,406 | INFO | Blueprint Extender: 2 | CertificateManagerService | 163 - org.opendaylight.aaa.cert - 0.21.0 | AaaCert Rpc Service has been initialized 2025-08-16T04:20:10,409 | INFO | Blueprint Extender: 2 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.cert/0.21.0 has been started 2025-08-16T04:20:10,414 | INFO | Blueprint Event Dispatcher: 1 | BlueprintBundleTracker | 180 - org.opendaylight.controller.blueprint - 11.0.0 | Blueprint container for bundle org.opendaylight.aaa.cert_0.21.0 [163] was successfully created 2025-08-16T04:20:10,442 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-40 | DefaultConfigPusher | 301 - org.opendaylight.openflowplugin.applications.of-switch-config-pusher - 0.20.0 | DefaultConfigPusher has started. 
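The ShardManager entries above report "follower initial sync status ... sync done true" for member-3's operational and config shards, i.e. this follower has caught up with the shard leaders. A minimal sketch of checking the same state from outside the JVM, assuming the Jolokia endpoint is reachable on the conventional port 8181 with default admin:admin credentials and that the shard-manager MBean is named as in OpenDaylight's clustering documentation (none of these assumptions appears in this log):

    # Sketch only: poll the shard-manager SyncStatus attribute over Jolokia.
    # Assumptions (not taken from this log): port 8181, admin:admin credentials,
    # and the MBean name used in OpenDaylight's clustering docs.
    import base64
    import json
    import urllib.request

    member = "127.0.0.1"  # replace with the management address of the member being checked
    mbean = ("org.opendaylight.controller:type=DistributedOperationalDatastore,"
             "Category=ShardManager,name=shard-manager-operational")
    req = urllib.request.Request(f"http://{member}:8181/jolokia/read/{mbean}")
    req.add_header("Authorization", "Basic " + base64.b64encode(b"admin:admin").decode())
    with urllib.request.urlopen(req) as resp:
        status = json.load(resp)
    # SyncStatus mirrors the "sync done true" messages logged by ShardManager above.
    print(status["value"]["SyncStatus"])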
2025-08-16T04:20:10,443 | INFO | Blueprint Extender: 2 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.topology-lldp-discovery/0.20.0 is waiting for dependencies [(&(|(type=default)(!(type=*)))(objectClass=org.opendaylight.mdsal.binding.api.DataBroker)), (objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService)] 2025-08-16T04:20:10,447 | INFO | Blueprint Extender: 3 | NodeConnectorInventoryEventTranslator | 300 - org.opendaylight.openflowplugin.applications.lldp-speaker - 0.20.0 | NodeConnectorInventoryEventTranslator has started. 2025-08-16T04:20:10,449 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.lldp-speaker/0.20.0 has been started 2025-08-16T04:20:10,451 | INFO | Blueprint Extender: 1 | StoreBuilder | 162 - org.opendaylight.aaa.authn-api - 0.21.0 | Checking if default entries must be created in IDM store 2025-08-16T04:20:10,450 | INFO | Blueprint Event Dispatcher: 1 | BlueprintBundleTracker | 180 - org.opendaylight.controller.blueprint - 11.0.0 | Blueprint container for bundle org.opendaylight.openflowplugin.applications.lldp-speaker_0.20.0 [300] was successfully created 2025-08-16T04:20:10,460 | INFO | Blueprint Extender: 2 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.topology-lldp-discovery/0.20.0 is waiting for dependencies [(objectClass=org.opendaylight.openflowplugin.api.openflow.configuration.ConfigurationService)] 2025-08-16T04:20:10,470 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config Received follower initial sync status for member-3-shard-toaster-config status sync done true 2025-08-16T04:20:10,492 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-40 | FlowCapableTopologyProvider | 304 - org.opendaylight.openflowplugin.applications.topology-manager - 0.20.0 | Topology Manager service started. 
2025-08-16T04:20:10,548 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-40 | OSGiSwitchConnectionProviders | 316 - org.opendaylight.openflowplugin.openflowjava.blueprint-config - 0.20.0 | MD-SAL configuration-based SwitchConnectionProviders started 2025-08-16T04:20:10,552 | INFO | opendaylight-cluster-data-notification-dispatcher-53 | OSGiSwitchConnectionProviders | 316 - org.opendaylight.openflowplugin.openflowjava.blueprint-config - 0.20.0 | Starting instance of type 'openflow-switch-connection-provider-default-impl' 2025-08-16T04:20:10,560 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-40 | CommandExtension | 120 - org.apache.karaf.shell.core - 4.4.7 | Registering commands for bundle org.opendaylight.openflowplugin.srm-shell/0.20.0 2025-08-16T04:20:10,561 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Loading properties from '(urn:opendaylight:params:xml:ns:yang:openflow:provider:config?revision=2016-05-10)openflow-provider-config' YANG file 2025-08-16T04:20:10,565 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | rpc-requests-quota configuration property was changed to '20000' 2025-08-16T04:20:10,566 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | global-notification-quota configuration property was changed to '64000' 2025-08-16T04:20:10,566 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | switch-features-mandatory configuration property was changed to 'false' 2025-08-16T04:20:10,566 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | enable-flow-removed-notification configuration property was changed to 'true' 2025-08-16T04:20:10,566 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | is-statistics-rpc-enabled configuration property was changed to 'false' 2025-08-16T04:20:10,566 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | barrier-count-limit configuration property was changed to '25600' 2025-08-16T04:20:10,566 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | barrier-interval-timeout-limit configuration property was changed to '500' 2025-08-16T04:20:10,566 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | echo-reply-timeout configuration property was changed to '2000' 2025-08-16T04:20:10,566 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | is-statistics-polling-on configuration property was changed to 'true' 2025-08-16T04:20:10,566 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | is-table-statistics-polling-on configuration property was changed to 'true' 2025-08-16T04:20:10,566 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | is-flow-statistics-polling-on configuration property was changed to 'true' 2025-08-16T04:20:10,566 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - 
org.opendaylight.openflowplugin.impl - 0.20.0 | is-group-statistics-polling-on configuration property was changed to 'true' 2025-08-16T04:20:10,567 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | is-meter-statistics-polling-on configuration property was changed to 'true' 2025-08-16T04:20:10,567 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | is-port-statistics-polling-on configuration property was changed to 'true' 2025-08-16T04:20:10,567 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | is-queue-statistics-polling-on configuration property was changed to 'true' 2025-08-16T04:20:10,567 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | skip-table-features configuration property was changed to 'true' 2025-08-16T04:20:10,567 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | basic-timer-delay configuration property was changed to '3000' 2025-08-16T04:20:10,567 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | maximum-timer-delay configuration property was changed to '900000' 2025-08-16T04:20:10,567 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | use-single-layer-serialization configuration property was changed to 'true' 2025-08-16T04:20:10,567 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | thread-pool-min-threads configuration property was changed to '1' 2025-08-16T04:20:10,567 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | thread-pool-max-threads configuration property was changed to '32000' 2025-08-16T04:20:10,567 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | thread-pool-timeout configuration property was changed to '60' 2025-08-16T04:20:10,568 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | device-connection-rate-limit-per-min configuration property was changed to '0' 2025-08-16T04:20:10,568 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | device-connection-hold-time-in-seconds configuration property was changed to '0' 2025-08-16T04:20:10,568 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | device-datastore-removal-delay configuration property was changed to '500' 2025-08-16T04:20:10,568 | INFO | Blueprint Extender: 3 | OSGiConfigurationServiceFactory | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Loading configuration from 'org.opendaylight.openflowplugin' configuration file 2025-08-16T04:20:10,573 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | felix.fileinstall.filename configuration property was changed to 'file:/tmp/karaf-0.23.0/etc/org.opendaylight.openflowplugin.cfg' 2025-08-16T04:20:10,574 | INFO | Blueprint Extender: 3 | ConfigurationServiceFactoryImpl | 309 - 
org.opendaylight.openflowplugin.impl - 0.20.0 | service.pid configuration property was changed to 'org.opendaylight.openflowplugin' 2025-08-16T04:20:10,641 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-40 | OSGiFactorySwitchConnectionConfiguration | 316 - org.opendaylight.openflowplugin.openflowjava.blueprint-config - 0.20.0 | Checking presence of configuration for (urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:config?revision=2016-05-06)switch-connection-config[{(urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:config?revision=2016-05-06)instance-name=openflow-switch-connection-provider-default-impl}] 2025-08-16T04:20:10,644 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-40 | OSGiFactorySwitchConnectionConfiguration | 316 - org.opendaylight.openflowplugin.openflowjava.blueprint-config - 0.20.0 | Checking presence of configuration for (urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:config?revision=2016-05-06)switch-connection-config[{(urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:config?revision=2016-05-06)instance-name=openflow-switch-connection-provider-legacy-impl}] 2025-08-16T04:20:10,644 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-40 | OSGiDistributedDataStore | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Distributed Datastore type OPERATIONAL started 2025-08-16T04:20:10,649 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-40 | OSGiFactorySwitchConnectionConfiguration | 316 - org.opendaylight.openflowplugin.openflowjava.blueprint-config - 0.20.0 | Configuration for (urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:config?revision=2016-05-06)switch-connection-config[{(urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:config?revision=2016-05-06)instance-name=openflow-switch-connection-provider-default-impl}] already present 2025-08-16T04:20:10,651 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-40 | OSGiFactorySwitchConnectionConfiguration | 316 - org.opendaylight.openflowplugin.openflowjava.blueprint-config - 0.20.0 | Configuration for (urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:config?revision=2016-05-06)switch-connection-config[{(urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:config?revision=2016-05-06)instance-name=openflow-switch-connection-provider-legacy-impl}] already present 2025-08-16T04:20:10,668 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | org.opendaylight.openflowplugin.applications.frm.impl.ForwardingRulesManagerImpl@29bfa80a was registered as configuration listener to OpenFlowPlugin configuration service 2025-08-16T04:20:10,711 | INFO | opendaylight-cluster-data-notification-dispatcher-53 | OSGiSwitchConnectionProviders | 316 - org.opendaylight.openflowplugin.openflowjava.blueprint-config - 0.20.0 | Starting instance of type 'openflow-switch-connection-provider-legacy-impl' 2025-08-16T04:20:10,800 | INFO | Blueprint Extender: 1 | StoreBuilder | 162 - org.opendaylight.aaa.authn-api - 0.21.0 | Found default domain in IDM store, skipping insertion of default data 2025-08-16T04:20:10,800 | INFO | Blueprint Extender: 1 | AAAShiroProvider | 172 - org.opendaylight.aaa.shiro - 0.21.0 | AAAShiroProvider Session Initiated 2025-08-16T04:20:10,805 | INFO | Blueprint Extender: 2 | ForwardingRulesManagerImpl | 299 - 
org.opendaylight.openflowplugin.applications.forwardingrules-manager - 0.20.0 | ForwardingRulesManager has started successfully. 2025-08-16T04:20:10,807 | INFO | Blueprint Extender: 2 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.forwardingrules-manager/0.20.0 has been started 2025-08-16T04:20:10,808 | INFO | Blueprint Event Dispatcher: 1 | BlueprintBundleTracker | 180 - org.opendaylight.controller.blueprint - 11.0.0 | Blueprint container for bundle org.opendaylight.openflowplugin.applications.forwardingrules-manager_0.20.0 [299] was successfully created 2025-08-16T04:20:10,825 | INFO | Blueprint Extender: 2 | ConfigurationServiceFactoryImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | org.opendaylight.openflowplugin.applications.topology.lldp.LLDPLinkAger@669f238a was registered as configuration listener to OpenFlowPlugin configuration service 2025-08-16T04:20:10,851 | INFO | Blueprint Extender: 2 | LLDPActivator | 303 - org.opendaylight.openflowplugin.applications.topology-lldp-discovery - 0.20.0 | Starting LLDPActivator with lldpSecureKey: aa9251f8-c7c0-4322-b8d6-c3a84593bda3 2025-08-16T04:20:10,852 | INFO | Blueprint Extender: 2 | LLDPActivator | 303 - org.opendaylight.openflowplugin.applications.topology-lldp-discovery - 0.20.0 | LLDPDiscoveryListener started. 2025-08-16T04:20:10,853 | INFO | Blueprint Extender: 2 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.applications.topology-lldp-discovery/0.20.0 has been started 2025-08-16T04:20:10,854 | INFO | Blueprint Event Dispatcher: 1 | BlueprintBundleTracker | 180 - org.opendaylight.controller.blueprint - 11.0.0 | Blueprint container for bundle org.opendaylight.openflowplugin.applications.topology-lldp-discovery_0.20.0 [303] was successfully created 2025-08-16T04:20:10,913 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-31 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-operational: resolved shard 1 to ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-operational/member-2-shard-inventory-operational#1155012946], sessionId=1, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent} 2025-08-16T04:20:10,914 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-31 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-operational: resolving connection ConnectingClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-operational, generation=2}, cookie=1} to ConnectedClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-operational, generation=2}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-operational/member-2-shard-inventory-operational#1155012946], sessionId=1, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} 2025-08-16T04:20:10,914 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-31 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-operational: replaced connection ConnectingClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-operational, generation=2}, cookie=1} with 
ConnectedClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-operational, generation=2}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-operational/member-2-shard-inventory-operational#1155012946], sessionId=1, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} in 746.8 μs 2025-08-16T04:20:10,933 | INFO | Blueprint Extender: 1 | IniSecurityManagerFactory | 171 - org.opendaylight.aaa.repackaged-shiro - 0.21.0 | Realms have been explicitly set on the SecurityManager instance - auto-setting of realms will not occur. 2025-08-16T04:20:10,956 | INFO | Blueprint Extender: 3 | OpenFlowPluginProviderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | OpenFlowPluginProvider started, waiting for onSystemBootReady() 2025-08-16T04:20:10,957 | INFO | Blueprint Extender: 3 | OpenFlowPluginProviderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Added connection provider org.opendaylight.openflowjava.protocol.impl.core.SwitchConnectionProviderImpl@71433cd1 2025-08-16T04:20:10,957 | INFO | Blueprint Extender: 3 | OpenFlowPluginProviderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Added connection provider org.opendaylight.openflowjava.protocol.impl.core.SwitchConnectionProviderImpl@377d05a8 2025-08-16T04:20:10,965 | INFO | Blueprint Extender: 3 | OnfExtensionProvider | 308 - org.opendaylight.openflowplugin.extension-onf - 0.20.0 | ONF Extension Provider started. 2025-08-16T04:20:10,967 | INFO | Blueprint Extender: 3 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.openflowplugin.impl/0.20.0 has been started 2025-08-16T04:20:10,967 | INFO | paxweb-config-3-thread-1 | ServerModel | 397 - org.ops4j.pax.web.pax-web-spi - 8.0.30 | Created new ServletContextModel{id=ServletContextModel-11,contextPath='/auth'} 2025-08-16T04:20:10,967 | INFO | paxweb-config-3-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of OsgiContextModel{WB,id=OCM-9,name='RealmManagement',path='/auth',bundle=org.opendaylight.aaa.shiro,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=298, osgi.http.whiteboard.context.name=RealmManagement, service.bundleid=172, service.scope=singleton, osgi.http.whiteboard.context.path=/auth}}", size=2} 2025-08-16T04:20:10,967 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Creating new Jetty context for ServletContextModel{id=ServletContextModel-11,contextPath='/auth'} 2025-08-16T04:20:10,968 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Adding OsgiContextModel{WB,id=OCM-9,name='RealmManagement',path='/auth',bundle=org.opendaylight.aaa.shiro,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=298, osgi.http.whiteboard.context.name=RealmManagement, service.bundleid=172, service.scope=singleton, osgi.http.whiteboard.context.path=/auth}} to o.o.p.w.s.j.i.PaxWebServletContextHandler@3bc579a{/auth,null,STOPPED} 2025-08-16T04:20:10,969 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing default OSGi context model for o.o.p.w.s.j.i.PaxWebServletContextHandler@3bc579a{/auth,null,STOPPED} 2025-08-16T04:20:10,972 | INFO | paxweb-config-3-thread-1 | HttpServiceEnabled | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | 
Registering FilterModel{id=FilterModel-12,name='org.opendaylight.aaa.filterchain.filters.CustomFilterAdapter',urlPatterns=[/*],contexts=[{WB,OCM-9,RealmManagement,/auth}]} 2025-08-16T04:20:10,973 | INFO | paxweb-config-3-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of FilterModel{id=FilterModel-12,name='org.opendaylight.aaa.filterchain.filters.CustomFilterAdapter',urlPatterns=[/*],contexts=[{WB,OCM-9,RealmManagement,/auth}]}", size=2} 2025-08-16T04:20:10,973 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context /auth 2025-08-16T04:20:10,974 | INFO | Blueprint Extender: 1 | WhiteboardWebServer | 176 - org.opendaylight.aaa.web.osgi-impl - 0.21.0 | Bundle org.opendaylight.aaa.shiro_0.21.0 [172] registered context path /auth with 4 service(s) 2025-08-16T04:20:10,974 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Starting Jetty context "/auth" with default Osgi Context OsgiContextModel{WB,id=OCM-9,name='RealmManagement',path='/auth',bundle=org.opendaylight.aaa.shiro,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=298, osgi.http.whiteboard.context.name=RealmManagement, service.bundleid=172, service.scope=singleton, osgi.http.whiteboard.context.path=/auth}} 2025-08-16T04:20:10,976 | INFO | paxweb-config-3-thread-1 | CustomFilterAdapter | 166 - org.opendaylight.aaa.filterchain - 0.21.0 | Initializing CustomFilterAdapter 2025-08-16T04:20:10,978 | INFO | Blueprint Event Dispatcher: 1 | BlueprintBundleTracker | 180 - org.opendaylight.controller.blueprint - 11.0.0 | Blueprint container for bundle org.opendaylight.openflowplugin.impl_0.20.0 [309] was successfully created 2025-08-16T04:20:10,978 | INFO | paxweb-config-3-thread-1 | CustomFilterAdapter | 166 - org.opendaylight.aaa.filterchain - 0.21.0 | Injecting a new filter chain with 0 Filters: 2025-08-16T04:20:10,978 | INFO | paxweb-config-3-thread-1 | ContextHandler | 139 - org.eclipse.jetty.util - 9.4.57.v20241219 | Started o.o.p.w.s.j.i.PaxWebServletContextHandler@3bc579a{/auth,null,AVAILABLE} 2025-08-16T04:20:10,982 | INFO | paxweb-config-3-thread-1 | OsgiServletContext | 397 - org.ops4j.pax.web.pax-web-spi - 8.0.30 | Registering OsgiServletContext{model=OsgiContextModel{WB,id=OCM-9,name='RealmManagement',path='/auth',bundle=org.opendaylight.aaa.shiro,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=298, osgi.http.whiteboard.context.name=RealmManagement, service.bundleid=172, service.scope=singleton, osgi.http.whiteboard.context.path=/auth}}} as OSGi service for "/auth" context path 2025-08-16T04:20:10,983 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context / 2025-08-16T04:20:10,984 | INFO | paxweb-config-3-thread-1 | HttpServiceEnabled | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering FilterModel{id=FilterModel-13,name='org.opendaylight.aaa.shiro.filters.AAAShiroFilter',urlPatterns=[/*, /moon/*],contexts=[{WB,OCM-9,RealmManagement,/auth}]} 2025-08-16T04:20:10,984 | INFO | paxweb-config-3-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of FilterModel{id=FilterModel-13,name='org.opendaylight.aaa.shiro.filters.AAAShiroFilter',urlPatterns=[/*, /moon/*],contexts=[{WB,OCM-9,RealmManagement,/auth}]}", size=2} 
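The ConfigurationServiceFactoryImpl entries a few lines above show the OpenFlow plugin taking its defaults from the openflow-provider-config YANG model and then re-reading file:/tmp/karaf-0.23.0/etc/org.opendaylight.openflowplugin.cfg, which in this run contributed only the felix.fileinstall.filename and service.pid bookkeeping keys. As a hedged sketch (key names copied from the log entries above, values picked here purely for illustration), overriding a few of those properties in that file would look like:

    # etc/org.opendaylight.openflowplugin.cfg -- illustrative overrides only
    rpc-requests-quota=20000
    global-notification-quota=64000
    is-statistics-polling-on=true
    skip-table-features=true
    basic-timer-delay=3000
    maximum-timer-delay=900000
    # any other key logged above (barrier-count-limit, echo-reply-timeout, ...) follows the same key=value form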
2025-08-16T04:20:10,984 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context /auth 2025-08-16T04:20:10,984 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context / 2025-08-16T04:20:10,985 | INFO | paxweb-config-3-thread-1 | HttpServiceEnabled | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering ServletModel{id=ServletModel-14,name='org.glassfish.jersey.servlet.ServletContainer',urlPatterns=[/*],contexts=[{WB,OCM-9,RealmManagement,/auth}]} 2025-08-16T04:20:10,985 | INFO | paxweb-config-3-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of ServletModel{id=ServletModel-14,name='org.glassfish.jersey.servlet.ServletContainer',urlPatterns=[/*],contexts=[{WB,OCM-9,RealmManagement,/auth}]}", size=1} 2025-08-16T04:20:10,985 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Adding servlet ServletModel{id=ServletModel-14,name='org.glassfish.jersey.servlet.ServletContainer',urlPatterns=[/*],contexts=[{WB,OCM-9,RealmManagement,/auth}]} 2025-08-16T04:20:10,986 | ERROR | Blueprint Extender: 1 | MdsalRestconfServer | 279 - org.opendaylight.netconf.restconf-server-mdsal - 9.0.0 | bundle org.opendaylight.netconf.restconf-server-mdsal:9.0.0 (279)[org.opendaylight.restconf.server.mdsal.MdsalRestconfServer(69)] : Constructor argument 5 in class class org.opendaylight.restconf.server.mdsal.MdsalRestconfServer has unsupported type [Lorg.opendaylight.restconf.server.spi.RpcImplementation; 2025-08-16T04:20:11,009 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-39 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Entity ownership change received for node : REMOTE_OWNERSHIP_CHANGED [wasOwner=false, isOwner=false, hasOwner=true] 2025-08-16T04:20:11,009 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-39 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Entity ownership change received for node : REMOTE_OWNERSHIP_LOST_NO_OWNER [wasOwner=false, isOwner=false, hasOwner=false] 2025-08-16T04:20:11,011 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-39 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Entity ownership change received for node : REMOTE_OWNERSHIP_CHANGED [wasOwner=false, isOwner=false, hasOwner=true] 2025-08-16T04:20:11,063 | INFO | Blueprint Extender: 1 | StoppableHttpServiceFactory | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Binding HTTP Service for bundle: [org.opendaylight.netconf.restconf-server-jaxrs_9.0.0 [278]] 2025-08-16T04:20:11,063 | INFO | paxweb-config-3-thread-1 | ServerModel | 397 - org.ops4j.pax.web.pax-web-spi - 8.0.30 | Created new ServletContextModel{id=ServletContextModel-18,contextPath='/rests'} 2025-08-16T04:20:11,064 | INFO | paxweb-config-3-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of OsgiContextModel{WB,id=OCM-16,name='RESTCONF',path='/rests',bundle=org.opendaylight.netconf.restconf-server-jaxrs,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=311, osgi.http.whiteboard.context.name=RESTCONF, service.bundleid=278, service.scope=singleton, osgi.http.whiteboard.context.path=/rests}}", size=2} 2025-08-16T04:20:11,064 | INFO | 
paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Creating new Jetty context for ServletContextModel{id=ServletContextModel-18,contextPath='/rests'} 2025-08-16T04:20:11,065 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Adding OsgiContextModel{WB,id=OCM-16,name='RESTCONF',path='/rests',bundle=org.opendaylight.netconf.restconf-server-jaxrs,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=311, osgi.http.whiteboard.context.name=RESTCONF, service.bundleid=278, service.scope=singleton, osgi.http.whiteboard.context.path=/rests}} to o.o.p.w.s.j.i.PaxWebServletContextHandler@68bbf978{/rests,null,STOPPED} 2025-08-16T04:20:11,065 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing default OSGi context model for o.o.p.w.s.j.i.PaxWebServletContextHandler@68bbf978{/rests,null,STOPPED} 2025-08-16T04:20:11,066 | INFO | paxweb-config-3-thread-1 | HttpServiceEnabled | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering FilterModel{id=FilterModel-19,name='org.opendaylight.aaa.filterchain.filters.CustomFilterAdapter',urlPatterns=[/*],contexts=[{WB,OCM-16,RESTCONF,/rests}]} 2025-08-16T04:20:11,066 | INFO | Blueprint Extender: 1 | WhiteboardWebServer | 176 - org.opendaylight.aaa.web.osgi-impl - 0.21.0 | Bundle org.opendaylight.netconf.restconf-server-jaxrs_9.0.0 [278] registered context path /rests with 4 service(s) 2025-08-16T04:20:11,066 | INFO | paxweb-config-3-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of FilterModel{id=FilterModel-19,name='org.opendaylight.aaa.filterchain.filters.CustomFilterAdapter',urlPatterns=[/*],contexts=[{WB,OCM-16,RESTCONF,/rests}]}", size=2} 2025-08-16T04:20:11,066 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context /auth 2025-08-16T04:20:11,066 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context /rests 2025-08-16T04:20:11,066 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Starting Jetty context "/rests" with default Osgi Context OsgiContextModel{WB,id=OCM-16,name='RESTCONF',path='/rests',bundle=org.opendaylight.netconf.restconf-server-jaxrs,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=311, osgi.http.whiteboard.context.name=RESTCONF, service.bundleid=278, service.scope=singleton, osgi.http.whiteboard.context.path=/rests}} 2025-08-16T04:20:11,067 | INFO | paxweb-config-3-thread-1 | CustomFilterAdapter | 166 - org.opendaylight.aaa.filterchain - 0.21.0 | Initializing CustomFilterAdapter 2025-08-16T04:20:11,067 | INFO | paxweb-config-3-thread-1 | CustomFilterAdapter | 166 - org.opendaylight.aaa.filterchain - 0.21.0 | Injecting a new filter chain with 0 Filters: 2025-08-16T04:20:11,067 | INFO | paxweb-config-3-thread-1 | ContextHandler | 139 - org.eclipse.jetty.util - 9.4.57.v20241219 | Started o.o.p.w.s.j.i.PaxWebServletContextHandler@68bbf978{/rests,null,AVAILABLE} 2025-08-16T04:20:11,067 | INFO | paxweb-config-3-thread-1 | OsgiServletContext | 397 - org.ops4j.pax.web.pax-web-spi - 8.0.30 | Registering 
OsgiServletContext{model=OsgiContextModel{WB,id=OCM-16,name='RESTCONF',path='/rests',bundle=org.opendaylight.netconf.restconf-server-jaxrs,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=311, osgi.http.whiteboard.context.name=RESTCONF, service.bundleid=278, service.scope=singleton, osgi.http.whiteboard.context.path=/rests}}} as OSGi service for "/rests" context path 2025-08-16T04:20:11,068 | INFO | Blueprint Extender: 1 | WhiteboardWebServer | 176 - org.opendaylight.aaa.web.osgi-impl - 0.21.0 | Bundle org.opendaylight.netconf.restconf-server-jaxrs_9.0.0 [278] registered context path /.well-known with 3 service(s) 2025-08-16T04:20:11,068 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context / 2025-08-16T04:20:11,068 | INFO | paxweb-config-3-thread-1 | HttpServiceEnabled | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering FilterModel{id=FilterModel-20,name='org.opendaylight.aaa.shiro.filters.AAAShiroFilter',urlPatterns=[/*],contexts=[{WB,OCM-16,RESTCONF,/rests}]} 2025-08-16T04:20:11,068 | INFO | paxweb-config-3-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of FilterModel{id=FilterModel-20,name='org.opendaylight.aaa.shiro.filters.AAAShiroFilter',urlPatterns=[/*],contexts=[{WB,OCM-16,RESTCONF,/rests}]}", size=2} 2025-08-16T04:20:11,068 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context /auth 2025-08-16T04:20:11,069 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context /rests 2025-08-16T04:20:11,069 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context / 2025-08-16T04:20:11,069 | INFO | paxweb-config-3-thread-1 | HttpServiceEnabled | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering ServletModel{id=ServletModel-21,name='org.glassfish.jersey.servlet.ServletContainer',urlPatterns=[/*],contexts=[{WB,OCM-16,RESTCONF,/rests}]} 2025-08-16T04:20:11,069 | INFO | paxweb-config-3-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of ServletModel{id=ServletModel-21,name='org.glassfish.jersey.servlet.ServletContainer',urlPatterns=[/*],contexts=[{WB,OCM-16,RESTCONF,/rests}]}", size=1} 2025-08-16T04:20:11,069 | INFO | Blueprint Extender: 1 | YangLibraryWriterSingleton | 291 - org.opendaylight.netconf.yanglib-mdsal-writer - 9.0.0 | Binding URL provider org.opendaylight.restconf.server.jaxrs.JaxRsYangLibrary@40551489 2025-08-16T04:20:11,069 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Adding servlet ServletModel{id=ServletModel-21,name='org.glassfish.jersey.servlet.ServletContainer',urlPatterns=[/*],contexts=[{WB,OCM-16,RESTCONF,/rests}]} 2025-08-16T04:20:11,069 | INFO | paxweb-config-3-thread-1 | ServerModel | 397 - org.ops4j.pax.web.pax-web-spi - 8.0.30 | Created new ServletContextModel{id=ServletContextModel-27,contextPath='/.well-known'} 2025-08-16T04:20:11,069 | INFO | paxweb-config-3-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of 
OsgiContextModel{WB,id=OCM-22,name='WellKnownURIs',path='/.well-known',bundle=org.opendaylight.netconf.restconf-server-jaxrs,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=315, osgi.http.whiteboard.context.name=WellKnownURIs, service.bundleid=278, service.scope=singleton, osgi.http.whiteboard.context.path=/.well-known}}", size=2} 2025-08-16T04:20:11,070 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Creating new Jetty context for ServletContextModel{id=ServletContextModel-27,contextPath='/.well-known'} 2025-08-16T04:20:11,070 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Adding OsgiContextModel{WB,id=OCM-22,name='WellKnownURIs',path='/.well-known',bundle=org.opendaylight.netconf.restconf-server-jaxrs,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=315, osgi.http.whiteboard.context.name=WellKnownURIs, service.bundleid=278, service.scope=singleton, osgi.http.whiteboard.context.path=/.well-known}} to o.o.p.w.s.j.i.PaxWebServletContextHandler@252cfb43{/.well-known,null,STOPPED} 2025-08-16T04:20:11,070 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing default OSGi context model for o.o.p.w.s.j.i.PaxWebServletContextHandler@252cfb43{/.well-known,null,STOPPED} 2025-08-16T04:20:11,071 | INFO | paxweb-config-3-thread-1 | HttpServiceEnabled | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering FilterModel{id=FilterModel-25,name='org.opendaylight.aaa.shiro.filters.AAAShiroFilter',urlPatterns=[/*],contexts=[{WB,OCM-22,WellKnownURIs,/.well-known}]} 2025-08-16T04:20:11,071 | INFO | paxweb-config-3-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of FilterModel{id=FilterModel-25,name='org.opendaylight.aaa.shiro.filters.AAAShiroFilter',urlPatterns=[/*],contexts=[{WB,OCM-22,WellKnownURIs,/.well-known}]}", size=2} 2025-08-16T04:20:11,071 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context /auth 2025-08-16T04:20:11,071 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context /rests 2025-08-16T04:20:11,071 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context /.well-known 2025-08-16T04:20:11,071 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Starting Jetty context "/.well-known" with default Osgi Context OsgiContextModel{WB,id=OCM-22,name='WellKnownURIs',path='/.well-known',bundle=org.opendaylight.netconf.restconf-server-jaxrs,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=315, osgi.http.whiteboard.context.name=WellKnownURIs, service.bundleid=278, service.scope=singleton, osgi.http.whiteboard.context.path=/.well-known}} 2025-08-16T04:20:11,072 | INFO | paxweb-config-3-thread-1 | ContextHandler | 139 - org.eclipse.jetty.util - 9.4.57.v20241219 | Started o.o.p.w.s.j.i.PaxWebServletContextHandler@252cfb43{/.well-known,null,AVAILABLE} 2025-08-16T04:20:11,072 | INFO | paxweb-config-3-thread-1 | OsgiServletContext | 397 - org.ops4j.pax.web.pax-web-spi - 8.0.30 | Registering 
OsgiServletContext{model=OsgiContextModel{WB,id=OCM-22,name='WellKnownURIs',path='/.well-known',bundle=org.opendaylight.netconf.restconf-server-jaxrs,ref={org.osgi.service.http.context.ServletContextHelper}={service.id=315, osgi.http.whiteboard.context.name=WellKnownURIs, service.bundleid=278, service.scope=singleton, osgi.http.whiteboard.context.path=/.well-known}}} as OSGi service for "/.well-known" context path 2025-08-16T04:20:11,072 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Changing filter configuration for context / 2025-08-16T04:20:11,073 | INFO | paxweb-config-3-thread-1 | HttpServiceEnabled | 396 - org.ops4j.pax.web.pax-web-runtime - 8.0.30 | Registering ServletModel{id=ServletModel-26,name='Rootfound',urlPatterns=[/*],contexts=[{WB,OCM-22,WellKnownURIs,/.well-known}]} 2025-08-16T04:20:11,073 | INFO | paxweb-config-3-thread-1 | JettyServerController | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Receiving Batch{"Registration of ServletModel{id=ServletModel-26,name='Rootfound',urlPatterns=[/*],contexts=[{WB,OCM-22,WellKnownURIs,/.well-known}]}", size=1} 2025-08-16T04:20:11,073 | INFO | paxweb-config-3-thread-1 | JettyServerWrapper | 394 - org.ops4j.pax.web.pax-web-jetty - 8.0.30 | Adding servlet ServletModel{id=ServletModel-26,name='Rootfound',urlPatterns=[/*],contexts=[{WB,OCM-22,WellKnownURIs,/.well-known}]} 2025-08-16T04:20:11,094 | INFO | Blueprint Extender: 1 | StringValueObjectFactory | 331 - org.opendaylight.yangtools.binding-reflect - 14.0.14 | Instantiated factory for class org.opendaylight.yang.gen.v1.urn.ietf.params.xml.ns.yang.ietf.inet.types.rev130715.Ipv4AddressNoZone 2025-08-16T04:20:11,094 | INFO | Blueprint Extender: 1 | StringValueObjectFactory | 331 - org.opendaylight.yangtools.binding-reflect - 14.0.14 | Instantiated factory for class org.opendaylight.yang.gen.v1.urn.ietf.params.xml.ns.yang.ietf.inet.types.rev130715.Ipv4Prefix 2025-08-16T04:20:11,095 | INFO | Blueprint Extender: 1 | StringValueObjectFactory | 331 - org.opendaylight.yangtools.binding-reflect - 14.0.14 | Instantiated factory for class org.opendaylight.yang.gen.v1.urn.ietf.params.xml.ns.yang.ietf.inet.types.rev130715.Ipv6AddressNoZone 2025-08-16T04:20:11,095 | INFO | Blueprint Extender: 1 | StringValueObjectFactory | 331 - org.opendaylight.yangtools.binding-reflect - 14.0.14 | Instantiated factory for class org.opendaylight.yang.gen.v1.urn.ietf.params.xml.ns.yang.ietf.inet.types.rev130715.Ipv6Prefix 2025-08-16T04:20:11,118 | INFO | Blueprint Extender: 1 | RestconfTransportChannelListener | 276 - org.opendaylight.netconf.restconf-server - 9.0.0 | Initialized with service class org.opendaylight.restconf.server.mdsal.MdsalRestconfServer 2025-08-16T04:20:11,118 | INFO | Blueprint Extender: 1 | RestconfTransportChannelListener | 276 - org.opendaylight.netconf.restconf-server - 9.0.0 | Initialized with base path: /restconf, default encoding: JSON, default pretty print: false 2025-08-16T04:20:11,158 | INFO | Blueprint Extender: 1 | OSGiNorthbound | 275 - org.opendaylight.netconf.restconf-nb - 9.0.0 | Global RESTCONF northbound pools started 2025-08-16T04:20:11,160 | INFO | Blueprint Extender: 1 | BlueprintContainerImpl | 79 - org.apache.aries.blueprint.core - 1.10.3 | Blueprint bundle org.opendaylight.aaa.shiro/0.21.0 has been started 2025-08-16T04:20:11,162 | INFO | Blueprint Event Dispatcher: 1 | BlueprintBundleTracker | 180 - org.opendaylight.controller.blueprint - 11.0.0 | Blueprint container for bundle 
org.opendaylight.aaa.shiro_0.21.0 [172] was successfully created 2025-08-16T04:20:11,455 | INFO | SystemReadyService-0 | KarafSystemReady | 202 - org.opendaylight.infrautils.ready-impl - 7.1.4 | checkBundleDiagInfos: Elapsed time 6s, remaining time 293s, diag: Active {INSTALLED=0, RESOLVED=10, UNKNOWN=0, GRACE_PERIOD=0, WAITING=0, STARTING=0, ACTIVE=399, STOPPING=0, FAILURE=0} 2025-08-16T04:20:11,456 | INFO | SystemReadyService-0 | SimpleSystemReadyMonitor | 201 - org.opendaylight.infrautils.ready-api - 7.1.4 | System ready; AKA: Aye captain, all warp coils are now operating at peak efficiency! [M.] 2025-08-16T04:20:11,456 | INFO | SystemReadyService-0 | SimpleSystemReadyMonitor | 201 - org.opendaylight.infrautils.ready-api - 7.1.4 | Now notifying all its registered SystemReadyListeners... 2025-08-16T04:20:11,456 | INFO | SystemReadyService-0 | OpenFlowPluginProviderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | onSystemBootReady() received, starting the switch connections 2025-08-16T04:20:11,533 | INFO | node-cleaner-0 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Try to remove device openflow:1 from operational DS 2025-08-16T04:20:11,584 | INFO | epollEventLoopGroup-2-1 | TcpServerFacade | 318 - org.opendaylight.openflowplugin.openflowjava.openflow-protocol-impl - 0.20.0 | Switch listener started and ready to accept incoming TCP/TLS connections on /[0:0:0:0:0:0:0:0]:6653 2025-08-16T04:20:11,586 | INFO | epollEventLoopGroup-2-1 | SwitchConnectionProviderImpl | 318 - org.opendaylight.openflowplugin.openflowjava.openflow-protocol-impl - 0.20.0 | Started TCP connection on /[0:0:0:0:0:0:0:0]:6653 2025-08-16T04:20:11,586 | INFO | epollEventLoopGroup-2-1 | OpenFlowPluginProviderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Connection provider org.opendaylight.openflowjava.protocol.impl.core.SwitchConnectionProviderImpl@71433cd1 started 2025-08-16T04:20:11,593 | INFO | epollEventLoopGroup-4-1 | TcpServerFacade | 318 - org.opendaylight.openflowplugin.openflowjava.openflow-protocol-impl - 0.20.0 | Switch listener started and ready to accept incoming TCP/TLS connections on /[0:0:0:0:0:0:0:0]:6633 2025-08-16T04:20:11,593 | INFO | epollEventLoopGroup-4-1 | SwitchConnectionProviderImpl | 318 - org.opendaylight.openflowplugin.openflowjava.openflow-protocol-impl - 0.20.0 | Started TCP connection on /[0:0:0:0:0:0:0:0]:6633 2025-08-16T04:20:11,593 | INFO | epollEventLoopGroup-4-1 | OpenFlowPluginProviderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Connection provider org.opendaylight.openflowjava.protocol.impl.core.SwitchConnectionProviderImpl@377d05a8 started 2025-08-16T04:20:11,594 | INFO | epollEventLoopGroup-4-1 | OpenFlowPluginProviderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | All switchConnectionProviders are up and running (2). 
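TcpServerFacade reports the two switch listeners bound to all interfaces on ports 6653 and 6633, and the Robot steps that follow connect a Mininet switch from 10.30.170.99 to them. A minimal sketch of confirming from another host that those listeners are accepting TCP connections; the controller address below is a placeholder, not taken from this log:

    # Sketch only: verify the OpenFlow listeners logged above are reachable.
    import socket

    controller = "127.0.0.1"  # replace with the controller's address
    for port in (6633, 6653):  # ports taken from the TcpServerFacade entries above
        with socket.create_connection((controller, port), timeout=5) as s:
            print(f"port {port}: connected from {s.getsockname()}")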
2025-08-16T04:20:20,496 | INFO | qtp273192795-157 | AuthenticationManager | 174 - org.opendaylight.aaa.tokenauthrealm - 0.21.0 | Authentication is now enabled 2025-08-16T04:20:20,496 | INFO | qtp273192795-157 | AuthenticationManager | 174 - org.opendaylight.aaa.tokenauthrealm - 0.21.0 | Authentication Manager activated 2025-08-16T04:20:20,901 | INFO | qtp273192795-393 | JaxRsRestconf | 278 - org.opendaylight.netconf.restconf-server-jaxrs - 9.0.0 | RESTCONF data-missing condition is reported as HTTP status 409 (RFC8040) 2025-08-16T04:20:20,904 | INFO | qtp273192795-393 | JaxRsRestconf | 278 - org.opendaylight.netconf.restconf-server-jaxrs - 9.0.0 | RESTCONF data-missing condition is reported as HTTP status 409 (RFC8040) 2025-08-16T04:20:21,141 | INFO | qtp273192795-393 | ApiPathParser | 273 - org.opendaylight.netconf.restconf-api - 9.0.0 | Consecutive slashes in REST URLs will be rejected 2025-08-16T04:20:23,004 | INFO | sshd-SshServer[7be65bf8](port=8101)-nio2-thread-2 | ServerSessionImpl | 125 - org.apache.sshd.osgi - 2.14.0 | Session karaf@/10.30.171.78:42850 authenticated 2025-08-16T04:20:23,578 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Verify Data Recovery After Follower Node2 Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Verify Data Recovery After Follower Node2 Restart 2025-08-16T04:26:15,355 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Start Mininet Again Connect To Follower Node2" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Start Mininet Again Connect To Follower Node2 2025-08-16T04:26:18,074 | INFO | epollEventLoopGroup-5-1 | SystemNotificationsListenerImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | ConnectionEvent: Connection closed by device, Device:/10.30.170.99:37920, NodeId:null 2025-08-16T04:26:18,119 | INFO | epollEventLoopGroup-5-2 | ConnectionAdapterImpl | 318 - org.opendaylight.openflowplugin.openflowjava.openflow-protocol-impl - 0.20.0 | Hello received 2025-08-16T04:26:18,587 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Verify Flows In Switch After Follower Node2 Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Verify Flows In Switch After Follower Node2 Restart 2025-08-16T04:26:18,620 | INFO | epollEventLoopGroup-5-2 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Device openflow:1 connected. 2025-08-16T04:26:18,620 | INFO | epollEventLoopGroup-5-2 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | No context chain found for device: openflow:1, creating new. 
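The Robot step above verifies flows after the follower restart; once openflow:1 reconnects, its state can be read back through the /rests RESTCONF context registered earlier in this log. A sketch of such a check, assuming the conventional RESTCONF port 8181 and default admin:admin credentials (neither is shown in this log):

    # Sketch only: read the operational inventory for openflow:1 over RESTCONF.
    # Assumptions (not taken from this log): port 8181 and admin:admin credentials;
    # the /rests base path matches the RESTCONF context registered above.
    import base64
    import urllib.request

    controller = "127.0.0.1"  # replace with the member serving the request
    url = (f"http://{controller}:8181/rests/data/"
           "opendaylight-inventory:nodes/node=openflow%3A1?content=nonconfig")
    req = urllib.request.Request(url, headers={
        "Accept": "application/yang-data+json",
        "Authorization": "Basic " + base64.b64encode(b"admin:admin").decode(),
    })
    with urllib.request.urlopen(req) as resp:
        print(resp.status, len(resp.read()), "bytes of operational node data")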
2025-08-16T04:26:18,620 | INFO | epollEventLoopGroup-5-2 | DeviceManagerImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | ConnectionEvent: Device connected to controller, Device:/10.30.170.99:37924, NodeId:Uri{value=openflow:1} 2025-08-16T04:26:18,641 | INFO | epollEventLoopGroup-5-2 | RoleContextImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Started timer for setting SLAVE role on device openflow:1 if no role will be set in 20s. 2025-08-16T04:26:18,771 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-40 | DeviceOwnershipServiceImpl | 298 - org.opendaylight.openflowplugin.applications.device-ownership-service - 0.20.0 | Entity ownership change received for node : openflow:1 : LOCAL_OWNERSHIP_GRANTED [wasOwner=false, isOwner=true, hasOwner=true] 2025-08-16T04:26:19,009 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Entity ownership change received for node : LOCAL_OWNERSHIP_GRANTED [wasOwner=false, isOwner=true, hasOwner=true] 2025-08-16T04:26:19,010 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | GuardedContextImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Starting DeviceContextImpl[NEW] service for node openflow:1 2025-08-16T04:26:19,021 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | GuardedContextImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Starting RpcContextImpl[NEW] service for node openflow:1 2025-08-16T04:26:19,044 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | GuardedContextImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Starting StatisticsContextImpl[NEW] service for node openflow:1 2025-08-16T04:26:19,044 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | GuardedContextImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Starting RoleContextImpl[NEW] service for node openflow:1 2025-08-16T04:26:19,046 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | SalRoleRpc | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | SetRole called with input:SetRoleInput{controllerRole=BECOMEMASTER, node=NodeRef{value=DataObjectIdentifier[ @ urn.opendaylight.inventory.rev130819.Nodes ... 
nodes.Node[NodeKey{id=Uri{value=openflow:1}}] ]}} 2025-08-16T04:26:19,047 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | SalRoleRpc | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Requesting state change to BECOMEMASTER 2025-08-16T04:26:19,047 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | SalRoleRpc | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | RoleChangeTask called on device:openflow:1 OFPRole:BECOMEMASTER 2025-08-16T04:26:19,047 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | RoleService | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | getGenerationIdFromDevice called for device: openflow:1 2025-08-16T04:26:19,050 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | ContextChainImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Started clustering services for node openflow:1 2025-08-16T04:26:19,058 | INFO | epollEventLoopGroup-5-2 | RoleService | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | submitRoleChange called for device:Uri{value=openflow:1}, role:BECOMEMASTER 2025-08-16T04:26:19,060 | INFO | epollEventLoopGroup-5-2 | RoleService | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | submitRoleChange onSuccess for device:Uri{value=openflow:1}, role:BECOMEMASTER 2025-08-16T04:26:19,065 | INFO | ofppool-0 | FlowNodeReconciliationImpl | 299 - org.opendaylight.openflowplugin.applications.forwardingrules-manager - 0.20.0 | Triggering reconciliation for device NodeKey{id=Uri{value=openflow:1}} 2025-08-16T04:26:19,075 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-config: resolved shard 1 to ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-config/member-2-shard-inventory-config#1198016831], sessionId=1, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent} 2025-08-16T04:26:19,075 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-config: resolving connection ConnectingClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=2}, cookie=1} to ConnectedClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=2}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-config/member-2-shard-inventory-config#1198016831], sessionId=1, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} 2025-08-16T04:26:19,076 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-config: replaced connection ConnectingClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=2}, cookie=1} with ConnectedClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=2}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-config/member-2-shard-inventory-config#1198016831], sessionId=1, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} in 704.9 μs 2025-08-16T04:26:19,083 | INFO | 
pool-20-thread-1 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Device openflow:1 connection is enabled by reconciliation framework. 2025-08-16T04:26:19,107 | INFO | epollEventLoopGroup-5-2 | DeviceInitializationUtil | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | IP address of the node openflow:1 is: IpAddress{ipv4Address=Ipv4Address{value=10.30.170.99}} 2025-08-16T04:26:19,107 | INFO | epollEventLoopGroup-5-2 | DeviceInitializationUtil | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Port number of the node openflow:1 is: 37924 2025-08-16T04:26:19,252 | INFO | epollEventLoopGroup-5-2 | OF13DeviceInitializer | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Static node openflow:1 info: OFPMPMETERFEATURES collected 2025-08-16T04:26:19,256 | INFO | epollEventLoopGroup-5-2 | OF13DeviceInitializer | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Static node openflow:1 info: OFPMPGROUPFEATURES collected 2025-08-16T04:26:19,270 | INFO | epollEventLoopGroup-5-2 | StringValueObjectFactory | 331 - org.opendaylight.yangtools.binding-reflect - 14.0.14 | Instantiated factory for class org.opendaylight.yang.gen.v1.urn.ietf.params.xml.ns.yang.ietf.yang.types.rev130715.MacAddress 2025-08-16T04:26:19,271 | INFO | epollEventLoopGroup-5-2 | StringValueObjectFactory | 331 - org.opendaylight.yangtools.binding-reflect - 14.0.14 | Instantiated factory for class org.opendaylight.yang.gen.v1.urn.ietf.params.xml.ns.yang.ietf.yang.types.rev130715.PhysAddress 2025-08-16T04:26:19,272 | INFO | epollEventLoopGroup-5-2 | StringValueObjectFactory | 331 - org.opendaylight.yangtools.binding-reflect - 14.0.14 | Instantiated factory for class org.opendaylight.yang.gen.v1.urn.ietf.params.xml.ns.yang.ietf.yang.types.rev130715.HexString 2025-08-16T04:26:19,272 | INFO | epollEventLoopGroup-5-2 | StringValueObjectFactory | 331 - org.opendaylight.yangtools.binding-reflect - 14.0.14 | Instantiated factory for class org.opendaylight.yang.gen.v1.urn.ietf.params.xml.ns.yang.ietf.yang.types.rev130715.DottedQuad 2025-08-16T04:26:19,272 | INFO | epollEventLoopGroup-5-2 | StringValueObjectFactory | 331 - org.opendaylight.yangtools.binding-reflect - 14.0.14 | Instantiated factory for class org.opendaylight.yang.gen.v1.urn.ietf.params.xml.ns.yang.ietf.yang.types.rev130715.Uuid 2025-08-16T04:26:19,274 | INFO | epollEventLoopGroup-5-2 | OF13DeviceInitializer | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Static node openflow:1 info: OFPMPPORTDESC collected 2025-08-16T04:26:19,297 | INFO | epollEventLoopGroup-5-2 | OF13DeviceInitializer | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Static node openflow:1 successfully finished collecting 2025-08-16T04:26:19,347 | INFO | pool-20-thread-1 | ContextChainImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Device openflow:1 is able to work as master 2025-08-16T04:26:19,348 | INFO | pool-20-thread-1 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Role MASTER was granted to device openflow:1 2025-08-16T04:26:19,349 | INFO | pool-20-thread-1 | DeviceManagerImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Publishing node added notification for Uri{value=openflow:1} 2025-08-16T04:26:19,350 | INFO | pool-20-thread-1 | StatisticsContextImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Starting statistics gathering for node openflow:1 2025-08-16T04:26:19,376 | INFO | opendaylight-cluster-data-notification-dispatcher-55 | ConnectionManagerImpl | 309 - 
org.opendaylight.openflowplugin.impl - 0.20.0 | Clearing the device connection timer for the device 1 2025-08-16T04:26:19,380 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | LazyBindingMap | 325 - org.opendaylight.yangtools.binding-data-codec-dynamic - 14.0.14 | Using lazy population for maps larger than 1 element(s) 2025-08-16T04:26:19,394 | INFO | opendaylight-cluster-data-notification-dispatcher-57 | StaticConfiguration | 204 - org.opendaylight.mdsal.binding-dom-adapter - 14.0.13 | Binding-over-DOM codec shortcuts are enabled 2025-08-16T04:26:31,572 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-31 | TransmitQueue | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | No request matching Envelope{sessionId=1, txSequence=4a, message=ModifyTransactionSuccess{target=member-3-datastore-operational-fe-2-txn-11-1, sequence=1}} found, ignoring response 2025-08-16T04:26:58,957 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-31 | TransmitQueue | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | No request matching Envelope{sessionId=1, txSequence=ca, message=ModifyTransactionSuccess{target=member-3-datastore-operational-fe-2-txn-20-1, sequence=1}} found, ignoring response 2025-08-16T04:27:44,466 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | TransmitQueue | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | No request matching Envelope{sessionId=1, txSequence=189, message=ModifyTransactionSuccess{target=member-3-datastore-operational-fe-2-txn-35-1, sequence=1}} found, ignoring response 2025-08-16T04:28:01,012 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Stop Mininet Connected To Follower Node2" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Stop Mininet Connected To Follower Node2 2025-08-16T04:28:01,154 | INFO | epollEventLoopGroup-5-2 | SystemNotificationsListenerImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | ConnectionEvent: Connection closed by device, Device:/10.30.170.99:37924, NodeId:openflow:1 2025-08-16T04:28:01,155 | INFO | epollEventLoopGroup-5-2 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Device openflow:1 disconnected. 
2025-08-16T04:28:01,155 | INFO | epollEventLoopGroup-5-2 | ReconciliationManagerImpl | 302 - org.opendaylight.openflowplugin.applications.reconciliation-framework - 0.20.0 | Stopping reconciliation for node Uri{value=openflow:1} 2025-08-16T04:28:01,159 | INFO | epollEventLoopGroup-5-2 | DeviceManagerImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Publishing node removed notification for Uri{value=openflow:1} 2025-08-16T04:28:01,161 | INFO | epollEventLoopGroup-5-2 | ReconciliationManagerImpl | 302 - org.opendaylight.openflowplugin.applications.reconciliation-framework - 0.20.0 | Stopping reconciliation for node Uri{value=openflow:1} 2025-08-16T04:28:01,162 | INFO | epollEventLoopGroup-5-2 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Role SLAVE was granted to device openflow:1 2025-08-16T04:28:01,163 | INFO | epollEventLoopGroup-5-2 | GuardedContextImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Stopping RoleContextImpl[RUNNING] service for node openflow:1 2025-08-16T04:28:01,167 | INFO | epollEventLoopGroup-5-2 | GuardedContextImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Stopping StatisticsContextImpl[RUNNING] service for node openflow:1 2025-08-16T04:28:01,167 | INFO | epollEventLoopGroup-5-2 | StatisticsContextImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Stopping running statistics gathering for node openflow:1 2025-08-16T04:28:01,169 | INFO | epollEventLoopGroup-5-2 | GuardedContextImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Stopping RpcContextImpl[RUNNING] service for node openflow:1 2025-08-16T04:28:01,169 | INFO | epollEventLoopGroup-5-2 | GuardedContextImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Stopping DeviceContextImpl[RUNNING] service for node openflow:1 2025-08-16T04:28:01,172 | INFO | ofppool-1 | ContextChainImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Closed clustering services for node openflow:1 2025-08-16T04:28:01,173 | INFO | epollEventLoopGroup-5-2 | ContextChainImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Closed clustering services registration for node openflow:1 2025-08-16T04:28:01,173 | INFO | epollEventLoopGroup-5-2 | GuardedContextImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Terminating DeviceContextImpl[TERMINATED] service for node openflow:1 2025-08-16T04:28:01,174 | INFO | epollEventLoopGroup-5-2 | GuardedContextImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Terminating RpcContextImpl[TERMINATED] service for node openflow:1 2025-08-16T04:28:01,174 | INFO | epollEventLoopGroup-5-2 | GuardedContextImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Terminating StatisticsContextImpl[TERMINATED] service for node openflow:1 2025-08-16T04:28:01,174 | INFO | epollEventLoopGroup-5-2 | StatisticsContextImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Stopping running statistics gathering for node openflow:1 2025-08-16T04:28:01,174 | INFO | epollEventLoopGroup-5-2 | GuardedContextImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Terminating RoleContextImpl[TERMINATED] service for node openflow:1 2025-08-16T04:28:01,359 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | DeviceOwnershipServiceImpl | 298 - org.opendaylight.openflowplugin.applications.device-ownership-service - 0.20.0 | Entity ownership change received for node : openflow:1 : LOCAL_OWNERSHIP_LOST_NO_OWNER [wasOwner=true, isOwner=false, hasOwner=false] 
2025-08-16T04:28:01,359 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Entity ownership change received for node : LOCAL_OWNERSHIP_LOST_NO_OWNER [wasOwner=true, isOwner=false, hasOwner=false] 2025-08-16T04:28:01,865 | INFO | node-cleaner-0 | ContextChainHolderImpl | 309 - org.opendaylight.openflowplugin.impl - 0.20.0 | Try to remove device openflow:1 from operational DS 2025-08-16T04:28:03,696 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Delete All Flows From Follower Node 2" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Delete All Flows From Follower Node 2 2025-08-16T04:28:04,172 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Verify No Flows In Cluster After Follower Node2 Restart" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster HA Data Recovery BulkFlow Single Switch.Verify No Flows In Cluster After Follower Node2 Restart 2025-08-16T04:28:06,908 | INFO | sshd-SshServer[7be65bf8](port=8101)-nio2-thread-2 | ServerSessionImpl | 125 - org.apache.sshd.osgi - 2.14.0 | Session karaf@/10.30.171.78:46360 authenticated 2025-08-16T04:28:07,432 | INFO | pipe-log:log "ROBOT MESSAGE: Starting suite /w/workspace/openflowplugin-csit-3node-clustering-bulkomatic-only-titanium/test/csit/suites/openflowplugin/Clustering_Bulkomatic/040__Cluster_Current_Term_Verification_3Node_Cluster.robot" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting suite /w/workspace/openflowplugin-csit-3node-clustering-bulkomatic-only-titanium/test/csit/suites/openflowplugin/Clustering_Bulkomatic/040__Cluster_Current_Term_Verification_3Node_Cluster.robot 2025-08-16T04:28:07,791 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Current Term Verification 3Node Cluster.Check Shard And Get Inventory" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Current Term Verification 3Node Cluster.Check Shard And Get Inventory 2025-08-16T04:28:12,669 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Current Term Verification 3Node Cluster.Initial Current Term Verification" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Current Term Verification 3Node Cluster.Initial Current Term Verification 2025-08-16T04:28:13,061 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Current Term Verification 3Node Cluster.Add Bulk Flow From Follower" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Current Term Verification 3Node Cluster.Add Bulk Flow From Follower 2025-08-16T04:28:14,391 | WARN | opendaylight-cluster-data-shard-dispatcher-38 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Supervisor Strategy caught unexpected exception - resuming java.lang.UnsupportedOperationException: null at 
org.opendaylight.raft.journal.MappedByteBuf.getBytes(MappedByteBuf.java:363) ~[bundleFile:11.0.0] at io.netty.buffer.AbstractUnpooledSlicedByteBuf.getBytes(AbstractUnpooledSlicedByteBuf.java:397) ~[bundleFile:4.2.2.Final] at org.opendaylight.raft.spi.BufThenFileOutputStream.switchToFile(BufThenFileOutputStream.java:144) ~[bundleFile:11.0.0] at org.opendaylight.raft.spi.BufThenFileOutputStream.write(BufThenFileOutputStream.java:111) ~[bundleFile:11.0.0] at java.io.ObjectOutputStream$BlockDataOutputStream.write(ObjectOutputStream.java:1875) ~[?:?] at java.io.ObjectOutputStream.write(ObjectOutputStream.java:725) ~[?:?] at org.opendaylight.raft.spi.ChunkedByteArray.copyTo(ChunkedByteArray.java:62) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.datastore.persisted.CT.writeExternal(CT.java:40) ~[bundleFile:?] at java.io.ObjectOutputStream.writeExternalData(ObjectOutputStream.java:1478) ~[?:?] at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1449) ~[?:?] at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1194) ~[?:?] at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:358) ~[?:?] at org.opendaylight.controller.cluster.raft.spi.EntryJournalV1$LogEntryWriter.objectToBytes(EntryJournalV1.java:245) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.EntryJournalV1$LogEntryWriter.objectToBytes(EntryJournalV1.java:209) ~[bundleFile:11.0.0] at org.opendaylight.raft.journal.SegmentWriter.append(SegmentWriter.java:90) ~[bundleFile:11.0.0] at org.opendaylight.raft.journal.SegmentedEntryWriter.append(SegmentedEntryWriter.java:52) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.EntryJournalV1.appendEntry(EntryJournalV1.java:480) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.runBatch(JournalWriteTask.java:385) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.run(JournalWriteTask.java:305) ~[bundleFile:11.0.0] at java.lang.VirtualThread.run(VirtualThread.java:329) ~[?:?] 
2025-08-16T04:28:14,399 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-31 | OneForOneStrategy | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | null 2025-08-16T04:29:44,490 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-operational: connection ConnectedClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-operational, generation=2}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-operational/member-2-shard-inventory-operational#1155012946], sessionId=1, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} reconnecting as ReconnectingClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-operational, generation=2}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-operational/member-2-shard-inventory-operational#1155012946], sessionId=1, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} 2025-08-16T04:29:44,493 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-operational: refreshing backend for shard 1 2025-08-16T04:29:44,499 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-41 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-operational: resolved shard 1 to ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-operational/member-2-shard-inventory-operational#1155012946], sessionId=2, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent} 2025-08-16T04:29:44,499 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-41 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-operational: resolving connection ReconnectingClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-operational, generation=2}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-operational/member-2-shard-inventory-operational#1155012946], sessionId=1, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} to ConnectedClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-operational, generation=2}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-operational/member-2-shard-inventory-operational#1155012946], sessionId=2, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} 2025-08-16T04:29:44,500 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-41 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-operational: replaced connection ReconnectingClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-operational, generation=2}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-operational/member-2-shard-inventory-operational#1155012946], sessionId=1, 
version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} with ConnectedClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-operational, generation=2}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-operational/member-2-shard-inventory-operational#1155012946], sessionId=2, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} in 827.0 μs 2025-08-16T04:31:15,709 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Candidate | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-config (Candidate): Starting new election term 5 2025-08-16T04:31:15,712 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | RaftActorBehavior | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-config (Follower) :- Switching from behavior Follower to Candidate, election term: 5 2025-08-16T04:31:15,713 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@127bea0d 2025-08-16T04:31:15,713 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-41 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-inventory-config , received role change from Follower to Candidate 2025-08-16T04:31:15,714 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-config: connection ConnectedClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=2}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-config/member-2-shard-inventory-config#1198016831], sessionId=1, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} reconnecting as ReconnectingClientConnection{client=ClientIdentifier{frontend=member-3-frontend-datastore-config, generation=2}, cookie=1, backend=ShardBackendInfo{actor=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/user/shardmanager-config/member-2-shard-inventory-config#1198016831], sessionId=1, version=POTASSIUM, maxMessages=1000, cookie=1, shard=inventory, dataTree=absent}} 2025-08-16T04:31:15,715 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | ClientActorBehavior | 182 - org.opendaylight.controller.cds-access-client - 11.0.0 | member-3-frontend-datastore-config: refreshing backend for shard 1 2025-08-16T04:31:15,715 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received role changed for member-3-shard-inventory-config from Follower to Candidate 2025-08-16T04:31:15,727 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Candidate | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-config (Candidate): LastApplied index 22 is behind last index 23 - switching to PreLeader 2025-08-16T04:31:15,734 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | RaftActorBehavior | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | 
member-3-shard-inventory-config (Candidate) :- Switching from behavior Candidate to PreLeader, election term: 5 2025-08-16T04:31:15,734 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received LeaderStateChanged message: org.opendaylight.controller.cluster.datastore.messages.ShardLeaderStateChanged@360502e 2025-08-16T04:31:15,737 | INFO | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | RoleChangeNotifier | 194 - org.opendaylight.controller.sal-clustering-commons - 11.0.0 | RoleChangeNotifier for member-3-shard-inventory-config , received role change from Candidate to PreLeader 2025-08-16T04:31:15,737 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | shard-manager-config: Received role changed for member-3-shard-inventory-config from Candidate to PreLeader 2025-08-16T04:31:15,739 | WARN | opendaylight-cluster-data-shard-dispatcher-37 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Supervisor Strategy caught unexpected exception - resuming java.util.concurrent.CancellationException: Previous action failed at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.newCancellationWithCause(JournalWriteTask.java:459) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.abortAndFailAction(JournalWriteTask.java:440) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.runBatch(JournalWriteTask.java:402) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.run(JournalWriteTask.java:305) ~[bundleFile:11.0.0] at java.lang.VirtualThread.run(VirtualThread.java:329) ~[?:?] Caused by: java.lang.UnsupportedOperationException at org.opendaylight.raft.journal.MappedByteBuf.getBytes(MappedByteBuf.java:363) ~[bundleFile:11.0.0] at io.netty.buffer.AbstractUnpooledSlicedByteBuf.getBytes(AbstractUnpooledSlicedByteBuf.java:397) ~[bundleFile:4.2.2.Final] at org.opendaylight.raft.spi.BufThenFileOutputStream.switchToFile(BufThenFileOutputStream.java:144) ~[bundleFile:11.0.0] at org.opendaylight.raft.spi.BufThenFileOutputStream.write(BufThenFileOutputStream.java:111) ~[bundleFile:11.0.0] at java.io.ObjectOutputStream$BlockDataOutputStream.write(ObjectOutputStream.java:1875) ~[?:?] at java.io.ObjectOutputStream.write(ObjectOutputStream.java:725) ~[?:?] at org.opendaylight.raft.spi.ChunkedByteArray.copyTo(ChunkedByteArray.java:62) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.datastore.persisted.CT.writeExternal(CT.java:40) ~[bundleFile:?] at java.io.ObjectOutputStream.writeExternalData(ObjectOutputStream.java:1478) ~[?:?] at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1449) ~[?:?] at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1194) ~[?:?] at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:358) ~[?:?] 
at org.opendaylight.controller.cluster.raft.spi.EntryJournalV1$LogEntryWriter.objectToBytes(EntryJournalV1.java:245) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.EntryJournalV1$LogEntryWriter.objectToBytes(EntryJournalV1.java:209) ~[bundleFile:11.0.0] at org.opendaylight.raft.journal.SegmentWriter.append(SegmentWriter.java:90) ~[bundleFile:11.0.0] at org.opendaylight.raft.journal.SegmentedEntryWriter.append(SegmentedEntryWriter.java:52) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.EntryJournalV1.appendEntry(EntryJournalV1.java:480) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.runBatch(JournalWriteTask.java:385) ~[bundleFile:11.0.0] ... 2 more 2025-08-16T04:31:15,740 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | OneForOneStrategy | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Previous action failed 2025-08-16T04:31:15,741 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | AbstractLeader | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-config (PreLeader): handleAppendEntriesReply: follower member-2-shard-inventory-config lastIndex 25 is ahead of our lastIndex 24 (snapshotIndex 21, snapshotTerm 4) - forcing install snaphot 2025-08-16T04:31:15,745 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | SnapshotManager | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-config: Initiating snapshot capture CaptureSnapshot [lastAppliedIndex=22, lastAppliedTerm=4, lastIndex=24, lastTerm=5, installSnapshotInitiated=, replicatedToAllIndex=-1, replicatedToAllTerm=-1, unAppliedEntries size=2, mandatoryTrim=false] to install on member-2-shard-inventory-config 2025-08-16T04:31:15,747 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | AbstractLeader | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-config (PreLeader): handleAppendEntriesReply: follower member-2-shard-inventory-config lastIndex 25 is ahead of our lastIndex 24 (snapshotIndex 21, snapshotTerm 4) - forcing install snaphot 2025-08-16T04:31:15,748 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$j], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:31:15,750 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | SnapshotManager | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-config: Persising snapshot at EntryInfo[index=22, term=4]/EntryInfo[index=24, term=5] 2025-08-16T04:31:15,752 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | SnapshotManager | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-config: Removed in-memory snapshotted entries, adjusted snapshotIndex: 21 and term: 4 2025-08-16T04:31:15,754 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | SnapshotManager | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-config: snapshot is durable as of 2025-08-16T04:31:15.751196089Z 2025-08-16T04:31:15,755 | WARN | opendaylight-cluster-data-shard-dispatcher-37 | ShardManager | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Supervisor Strategy caught unexpected exception - resuming java.util.concurrent.CancellationException: Previous action failed at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.newCancellationWithCause(JournalWriteTask.java:459) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.abortAndFailAction(JournalWriteTask.java:440) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.runBatch(JournalWriteTask.java:402) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.run(JournalWriteTask.java:305) ~[bundleFile:11.0.0] at java.lang.VirtualThread.run(VirtualThread.java:329) ~[?:?] Caused by: java.lang.UnsupportedOperationException at org.opendaylight.raft.journal.MappedByteBuf.getBytes(MappedByteBuf.java:363) ~[bundleFile:11.0.0] at io.netty.buffer.AbstractUnpooledSlicedByteBuf.getBytes(AbstractUnpooledSlicedByteBuf.java:397) ~[bundleFile:4.2.2.Final] at org.opendaylight.raft.spi.BufThenFileOutputStream.switchToFile(BufThenFileOutputStream.java:144) ~[bundleFile:11.0.0] at org.opendaylight.raft.spi.BufThenFileOutputStream.write(BufThenFileOutputStream.java:111) ~[bundleFile:11.0.0] at java.io.ObjectOutputStream$BlockDataOutputStream.write(ObjectOutputStream.java:1875) ~[?:?] at java.io.ObjectOutputStream.write(ObjectOutputStream.java:725) ~[?:?] at org.opendaylight.raft.spi.ChunkedByteArray.copyTo(ChunkedByteArray.java:62) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.datastore.persisted.CT.writeExternal(CT.java:40) ~[bundleFile:?] at java.io.ObjectOutputStream.writeExternalData(ObjectOutputStream.java:1478) ~[?:?] at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1449) ~[?:?] at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1194) ~[?:?] at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:358) ~[?:?] 
at org.opendaylight.controller.cluster.raft.spi.EntryJournalV1$LogEntryWriter.objectToBytes(EntryJournalV1.java:245) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.EntryJournalV1$LogEntryWriter.objectToBytes(EntryJournalV1.java:209) ~[bundleFile:11.0.0] at org.opendaylight.raft.journal.SegmentWriter.append(SegmentWriter.java:90) ~[bundleFile:11.0.0] at org.opendaylight.raft.journal.SegmentedEntryWriter.append(SegmentedEntryWriter.java:52) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.EntryJournalV1.appendEntry(EntryJournalV1.java:480) ~[bundleFile:11.0.0] at org.opendaylight.controller.cluster.raft.spi.JournalWriteTask.runBatch(JournalWriteTask.java:385) ~[bundleFile:11.0.0] ... 2 more 2025-08-16T04:31:15,756 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | OneForOneStrategy | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | Previous action failed 2025-08-16T04:31:15,771 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | AbstractLeader | 190 - org.opendaylight.controller.sal-akka-raft - 11.0.0 | member-3-shard-inventory-config (PreLeader): Snapshot successfully installed on follower member-2-shard-inventory-config (last chunk 1) - matchIndex set to 22, nextIndex set to 23 2025-08-16T04:31:16,777 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$k], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:31:17,798 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$l], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:31:18,818 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$m], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:31:19,837 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$n], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:31:20,859 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$o], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:31:21,878 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$p], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:31:22,898 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:31:23,917 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$r], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:31:24,938 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$s], minVersion=POTASSIUM, maxVersion=POTASSIUM}. 
isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:31:25,958 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$t], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:31:26,978 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$u], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:31:27,998 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$v], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:31:29,019 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$w], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:31:30,037 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$x], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:31:31,059 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$y], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:31:32,077 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$z], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:31:33,097 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$A], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:31:34,118 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$B], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:31:35,138 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$C], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:31:35,732 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-39 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?] at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?] at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?] at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?] at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later. at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?] at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more 2025-08-16T04:31:36,158 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$D], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:31:37,178 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$E], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:31:38,198 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$F], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:31:39,218 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$G], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:31:40,239 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$H], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:31:41,258 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$I], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:31:42,278 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$J], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:31:43,299 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$K], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:31:44,320 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$L], minVersion=POTASSIUM, maxVersion=POTASSIUM}. 
isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:31:45,339 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$M], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:31:46,358 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$N], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:31:47,378 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$O], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:31:48,399 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$P], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:31:49,419 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:31:50,438 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$R], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:31:51,459 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$S], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:31:52,479 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$T], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:31:53,499 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$U], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:31:54,519 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$V], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:31:55,539 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$W], minVersion=POTASSIUM, maxVersion=POTASSIUM}. 
isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:31:56,560 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$X], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:31:56,759 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?] at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?] at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?] at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?] at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later. at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?] at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more 2025-08-16T04:31:57,579 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Y], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:31:58,599 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Z], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:31:59,623 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$0], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:32:00,650 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$1], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:32:01,670 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$2], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:32:02,690 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$3], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:32:03,710 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$4], minVersion=POTASSIUM, maxVersion=POTASSIUM}. 
isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:32:04,730 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$5], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:32:05,749 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$6], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:32:06,770 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$7], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:32:07,790 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$8], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:32:08,811 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$9], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:32:09,830 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$+], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:32:10,850 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$~], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:32:11,871 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ab], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:32:12,890 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$bb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:32:13,910 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$cb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:32:14,930 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$db], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:32:15,950 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$eb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:32:16,971 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$fb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:32:17,799 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-39 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?] at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?] at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?] 
at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?] at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later. at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?] at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?] 
at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more 2025-08-16T04:32:17,990 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$gb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:32:19,010 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$hb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:32:20,030 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ib], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:32:21,051 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$jb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:32:22,070 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$kb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:32:23,090 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$lb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:32:24,111 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$mb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:32:25,132 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$nb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:32:26,161 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ob], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:32:27,181 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$pb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:32:28,200 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$qb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:32:29,221 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$rb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:32:30,241 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$sb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:32:31,261 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$tb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:32:32,281 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ub], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:32:33,300 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$vb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:32:34,320 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$wb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:32:35,341 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$xb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:32:36,362 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$yb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:32:37,381 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$zb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:32:38,401 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ab], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:32:38,839 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?] at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?] at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?] at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?] at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later. at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?] at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more 2025-08-16T04:32:39,421 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Bb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:32:40,442 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Cb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:32:41,462 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Db], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:32:42,482 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Eb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:32:43,502 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Fb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:32:44,522 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Gb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:32:45,542 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Hb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:32:46,562 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ib], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:32:47,582 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Jb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:32:48,602 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Kb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:32:49,622 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Lb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:32:50,642 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Mb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:32:51,662 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Nb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:32:52,683 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ob], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:32:53,702 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Pb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:32:54,722 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Qb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:32:55,742 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Rb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:32:56,762 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Sb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:32:57,782 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Tb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:32:58,802 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ub], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:32:59,822 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Vb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:32:59,869 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-39 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?] at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?] at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?] at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?] at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later. at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?] at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more 2025-08-16T04:33:00,843 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Wb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:33:01,863 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Xb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:33:02,882 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Yb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:33:03,902 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Zb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:33:04,922 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$0b], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:33:05,942 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$1b], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:33:06,961 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$2b], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:33:07,982 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$3b], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:33:09,003 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$4b], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:33:10,023 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$5b], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:33:11,043 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$6b], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:33:12,062 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$7b], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:33:13,082 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$8b], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:33:14,103 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$9b], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:33:15,123 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$+b], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:33:16,143 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$~b], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:33:17,164 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ac], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:33:18,183 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$bc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:33:19,203 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$cc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:33:20,222 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$dc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:33:20,909 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?] at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?] at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?] at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?] at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later. at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?] at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more 2025-08-16T04:33:21,243 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ec], minVersion=POTASSIUM, maxVersion=POTASSIUM}. 
isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:33:22,263 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$fc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:33:23,283 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$gc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:33:24,304 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$hc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:33:25,324 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ic], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:33:26,344 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$jc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:33:27,375 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$kc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:33:28,404 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$lc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:33:29,423 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$mc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:33:30,443 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$nc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:33:31,464 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$oc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:33:32,484 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$pc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:33:33,504 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$qc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:33:34,524 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$rc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:33:35,543 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$sc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:33:36,563 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$tc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:33:37,584 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$uc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:33:38,604 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$vc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:33:39,624 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$wc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:33:40,643 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$xc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:33:41,664 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$yc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:33:41,950 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-31 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?] at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?] at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?] at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?] at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later. at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?] at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more 2025-08-16T04:33:42,684 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$zc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:33:43,704 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ac], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:33:44,724 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Bc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:33:45,745 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Cc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:33:46,764 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Dc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:33:47,785 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ec], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:33:48,805 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Fc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:33:49,825 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Gc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:33:50,855 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Hc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:33:51,875 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ic], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:33:52,894 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Jc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:33:53,914 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Kc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:33:54,935 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Lc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:33:55,954 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Mc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:33:56,975 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Nc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:33:57,995 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Oc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:33:59,015 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Pc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:34:00,035 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Qc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:34:01,055 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Rc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:34:02,074 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Sc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:34:02,989 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?] at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?] at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?] at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?] at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later. at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?] at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more 2025-08-16T04:34:03,095 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Tc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. 
isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:34:04,114 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Uc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:34:05,135 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Vc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:34:06,155 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Wc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:34:07,175 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Xc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:34:08,194 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Yc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:34:09,215 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Zc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:34:10,234 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$0c], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:34:11,257 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$1c], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:34:12,275 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$2c], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:34:13,295 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$3c], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:34:14,315 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$4c], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:34:15,335 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$5c], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:34:16,355 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$6c], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:34:17,375 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$7c], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:34:18,396 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$8c], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:34:19,418 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$9c], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:34:20,437 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$+c], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:34:21,456 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$~c], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:34:22,476 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ad], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:34:23,496 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$bd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:34:24,029 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-39 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?] at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?] at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?] at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?] at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later. at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?] at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more 2025-08-16T04:34:24,516 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$cd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:34:25,536 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$dd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:34:26,559 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ed], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:34:27,575 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$fd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:34:28,596 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$gd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:34:29,616 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$hd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:34:30,636 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$id], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:34:31,656 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$jd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:34:32,676 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$kd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:34:33,696 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ld], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:34:34,716 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$md], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:34:35,736 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$nd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:34:36,756 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$od], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:34:37,776 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$pd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:34:38,796 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$qd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:34:39,816 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$rd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:34:40,836 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$sd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:34:41,856 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$td], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:34:42,876 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ud], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:34:43,897 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$vd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:34:44,916 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$wd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:34:45,069 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?] at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?] at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?] 
at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?] at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later. at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?] at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?] 
at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more 2025-08-16T04:34:45,937 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$xd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:34:46,956 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$yd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:34:47,976 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$zd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:34:48,997 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ad], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:34:50,016 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Bd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:34:51,036 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Cd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:34:52,057 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Dd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:34:53,077 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ed], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:34:54,097 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Fd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:34:54,196 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Current Term Verification 3Node Cluster.Get Bulk Flows And Verify In Cluster" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Current Term Verification 3Node Cluster.Get Bulk Flows And Verify In Cluster 2025-08-16T04:34:54,321 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$l], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:34:55,117 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Gd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:34:55,339 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$m], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:34:56,137 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Hd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:34:56,358 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$n], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:34:57,157 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Id], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:34:57,378 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$o], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:34:58,177 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Jd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:34:58,399 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$p], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:34:59,197 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Kd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:34:59,419 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:35:00,218 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ld], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:00,438 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$r], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:01,237 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Md], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:01,458 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$s], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:02,257 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Nd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:35:02,479 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$t], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:03,277 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Od], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:03,499 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$u], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:04,297 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Pd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:04,518 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$v], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:35:05,317 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Qd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:05,539 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$w], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:06,109 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-39 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?] at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?] at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?] at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?] at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later. at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?] at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more 2025-08-16T04:35:06,337 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Rd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. 
isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:06,558 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$x], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:07,357 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Sd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:07,578 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$y], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:08,377 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Td], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:08,599 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$z], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:35:09,397 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ud], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:09,618 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$A], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:10,417 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Vd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:10,638 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$B], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:11,437 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Wd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:35:11,658 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$C], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:12,457 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Xd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:12,679 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$D], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:13,477 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Yd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:13,699 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$E], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:35:14,498 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Zd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:14,719 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$F], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:15,518 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$0d], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:15,738 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$G], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:16,537 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$1d], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:35:16,758 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$H], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:17,558 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$2d], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:17,779 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$I], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:18,578 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$3d], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:18,798 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$J], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:35:19,598 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$4d], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:19,819 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$K], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:20,617 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$5d], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:20,839 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$L], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:21,639 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$6d], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:35:21,859 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$M], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:22,659 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$7d], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:22,879 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$N], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:23,678 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$8d], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:23,899 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$O], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:35:24,698 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$9d], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:24,918 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$P], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:25,719 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$+d], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:25,938 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:26,739 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$~d], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:35:26,958 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$R], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:35:27,150 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-08-16T04:35:27,758 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ae], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:35:27,979 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$S], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:28,778 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$be], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:28,999 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$T], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:29,798 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ce], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:30,019 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$U], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:35:30,818 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$de], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:31,038 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$V], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:31,837 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ee], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:32,059 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$W], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:32,858 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$fe], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:35:33,079 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$X], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:33,878 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ge], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:34,099 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Y], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:34,898 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$he], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:35,119 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Z], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:35:35,917 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ie], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:36,139 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$0], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:36,938 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$je], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:37,159 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$1], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:37,959 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ke], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:35:38,179 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$2], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:38,978 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$le], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:39,199 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$3], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:39,999 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$me], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:40,219 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$4], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:35:41,019 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ne], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:41,240 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$5], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:42,038 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$oe], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:42,258 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$6], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:43,058 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$pe], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:35:43,278 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$7], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:44,078 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$qe], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:44,299 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$8], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:45,098 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$re], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:45,319 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$9], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:35:46,118 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$se], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:35:46,338 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$+], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:35:47,139 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$te], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:35:47,359 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$~], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:35:48,158 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ue], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:35:48,189 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-41 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-08-16T04:35:48,379 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ab], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:35:49,178 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ve], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:35:49,398 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$bb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:35:50,198 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$we], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:50,419 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$cb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:51,218 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$xe], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:51,439 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$db], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:52,239 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ye], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:35:52,459 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$eb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:53,258 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ze], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:53,478 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$fb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:54,279 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ae], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:54,499 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$gb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:35:55,299 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Be], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:55,519 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$hb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:56,319 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ce], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:56,538 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ib], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:57,338 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$De], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:35:57,558 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$jb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:58,359 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ee], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:58,579 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$kb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:59,379 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Fe], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:35:59,599 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$lb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:36:00,399 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ge], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:00,619 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$mb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:01,418 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$He], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:01,638 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$nb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:02,439 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ie], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:36:02,659 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ob], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:03,458 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Je], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:03,678 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$pb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:04,479 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ke], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:04,698 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$qb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:36:05,499 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Le], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:05,719 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$rb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:06,519 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Me], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:06,738 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$sb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:07,539 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ne], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:36:07,759 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$tb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:36:08,559 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Oe], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:36:08,779 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ub], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:36:09,229 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-08-16T04:36:09,579 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Pe], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:36:09,799 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$vb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:36:10,599 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Qe], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:36:10,819 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$wb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:36:11,620 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Re], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:36:11,838 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$xb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:12,641 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Se], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:12,858 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$yb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:13,659 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Te], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:13,878 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$zb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:36:14,679 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ue], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:14,898 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ab], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:15,699 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ve], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:15,919 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Bb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:16,719 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$We], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:36:16,938 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Cb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:17,739 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Xe], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:17,959 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Db], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:18,760 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ye], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:18,978 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Eb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:36:19,780 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ze], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:19,998 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Fb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:20,800 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$0e], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:21,019 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Gb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:21,819 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$1e], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:36:22,038 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Hb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:22,840 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$2e], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:23,059 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ib], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:23,859 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$3e], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:24,078 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Jb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:36:24,880 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$4e], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:25,098 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Kb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:25,899 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$5e], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:26,118 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Lb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:26,919 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$6e], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:36:27,138 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Mb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:27,940 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$7e], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:28,158 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Nb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:28,960 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$8e], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:29,178 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ob], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:36:29,980 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$9e], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:36:30,199 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Pb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:36:30,269 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-08-16T04:36:30,999 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$+e], minVersion=POTASSIUM, maxVersion=POTASSIUM}.
isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:31,218 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Qb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:32,020 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$~e], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:32,239 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Rb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:33,040 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$af], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:33,258 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Sb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:36:34,059 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$bf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:34,278 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Tb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:35,080 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$cf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:35,299 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ub], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:36,099 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$df], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:36:36,319 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Vb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:37,120 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ef], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:37,338 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Wb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:38,140 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ff], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:38,358 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Xb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:36:39,160 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$gf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:39,378 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Yb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:40,180 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$hf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:40,398 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Zb], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:41,200 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$if], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:36:41,418 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$0b], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:42,220 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$jf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:42,438 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$1b], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:43,240 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$kf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:43,458 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$2b], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:36:44,260 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$lf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:44,478 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$3b], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:45,280 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$mf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:45,498 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$4b], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:46,300 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$nf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:36:46,518 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$5b], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:47,320 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$of], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:47,538 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$6b], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:48,341 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$pf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:48,558 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$7b], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:36:49,360 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$qf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:36:49,579 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$8b], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:36:50,380 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$rf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:36:50,599 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$9b], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:36:51,299 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-40 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-08-16T04:36:51,400 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$sf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:36:51,619 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$+b], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:36:52,420 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$tf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:36:52,638 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$~b], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:36:53,441 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$uf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:53,658 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ac], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:54,461 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$vf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:54,678 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$bc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:55,481 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$wf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:36:55,699 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$cc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:56,501 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$xf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:56,718 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$dc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:57,521 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$yf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:57,738 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ec], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:36:58,540 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$zf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:58,759 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$fc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:59,560 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Af], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:36:59,779 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$gc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:00,580 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Bf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:37:00,799 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$hc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:01,600 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Cf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:01,820 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ic], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:02,620 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Df], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:02,838 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$jc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:37:03,640 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ef], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:03,858 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$kc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:04,660 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ff], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:04,878 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$lc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:05,681 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Gf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:37:05,898 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$mc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:06,701 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Hf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:06,919 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$nc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:07,721 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$If], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:07,938 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$oc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:37:08,740 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Jf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:08,958 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$pc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:09,760 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Kf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:09,979 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$qc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:10,781 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Lf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:37:10,998 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$rc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:37:11,801 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Mf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:37:12,018 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$sc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:37:12,339 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-31 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-08-16T04:37:12,820 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Nf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:37:13,038 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$tc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:37:13,841 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Of], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:37:14,058 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$uc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:37:14,861 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Pf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:37:15,077 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$vc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:15,881 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Qf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:16,099 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$wc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:16,901 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Rf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:17,118 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$xc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:37:18,138 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$yc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:37:19,157 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$zc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:37:19,267 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Sf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:37:19,277 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-41 | PhiAccrualFailureDetector | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | heartbeat interval is growing too large for address pekko://opendaylight-cluster-data@10.30.170.62:2550: 2699 millis
2025-08-16T04:37:20,178 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ac], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:37:20,291 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Tf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:37:21,198 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Bc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:21,311 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Uf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:22,218 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Cc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:22,330 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Vf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:23,238 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Dc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:37:23,351 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Wf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:24,258 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ec], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:24,371 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Xf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:25,278 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Fc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:25,392 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Yf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:37:26,299 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Gc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:26,411 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Zf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:27,319 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Hc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:27,431 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$0f], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:28,339 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ic], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:37:28,452 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$1f], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:29,358 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Jc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:29,471 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$2f], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:30,379 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Kc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:30,491 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$3f], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:37:31,399 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Lc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:37:31,511 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$4f], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:37:32,418 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Mc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:37:32,531 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$5f], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:37:33,379 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-40 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-08-16T04:37:33,438 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Nc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:37:33,551 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$6f], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:37:34,458 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Oc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:37:34,572 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$7f], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:37:35,479 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Pc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:35,590 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$8f], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:36,499 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Qc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:36,612 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$9f], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:37,518 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Rc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:37:37,632 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$+f], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:38,538 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Sc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:38,653 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$~f], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:39,558 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Tc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:39,671 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ag], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:37:40,578 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Uc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:40,692 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$bg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:41,598 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Vc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:41,711 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$cg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:42,617 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Wc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:37:42,731 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$dg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:43,638 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Xc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:43,751 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$eg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:44,659 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Yc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:44,771 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$fg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:37:45,680 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Zc], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:45,791 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$gg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:46,698 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$0c], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:46,811 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$hg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:47,719 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$1c], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:37:47,831 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ig], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:48,738 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$2c], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:48,852 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$jg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:49,758 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$3c], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:49,871 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$kg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:37:50,778 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$4c], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:50,892 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$lg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:51,798 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$5c], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:51,912 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$mg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:52,818 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$6c], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:37:52,932 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ng], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:37:53,838 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$7c], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:37:53,952 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$og], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:37:54,420 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-08-16T04:37:54,859 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$8c], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:37:54,971 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$pg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:37:55,879 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$9c], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:37:55,991 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$qg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:37:56,899 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$+c], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:37:57,011 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$rg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:57,919 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$~c], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:58,032 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$sg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:58,938 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ad], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:37:59,051 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$tg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:37:59,958 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$bd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:00,071 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ug], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:00,979 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$cd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:01,092 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$vg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:01,998 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$dd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:38:02,112 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$wg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:03,018 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ed], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:03,132 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$xg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:04,039 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$fd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:04,152 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$yg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:38:05,058 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$gd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:05,171 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$zg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:06,078 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$hd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:06,193 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ag], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:07,098 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$id], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:38:07,212 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Bg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:08,118 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$jd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:08,232 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Cg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:09,139 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$kd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:09,252 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Dg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:38:10,158 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ld], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:10,272 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Eg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:11,179 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$md], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:11,292 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Fg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:12,198 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$nd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:38:12,312 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Gg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:13,218 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$od], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:13,332 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Hg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:14,238 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$pd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:14,352 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ig], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:38:15,258 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$qd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:38:15,372 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Jg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:38:15,459 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-08-16T04:38:16,278 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$rd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:38:16,392 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Kg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:38:17,298 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$sd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:38:17,412 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Lg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:38:18,318 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$td], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:38:18,432 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Mg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:38:19,338 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ud], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:19,452 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ng], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:20,358 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$vd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:20,472 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Og], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:21,378 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$wd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:38:21,491 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Pg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:22,399 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$xd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:22,512 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Qg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:23,418 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$yd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:23,533 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Rg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:38:24,439 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$zd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:24,552 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Sg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:25,458 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ad], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:25,572 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Tg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:26,478 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Bd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:38:26,592 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ug], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:27,498 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Cd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:27,612 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Vg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:28,517 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Dd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:28,633 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Wg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:38:29,539 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ed], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:29,652 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Xg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:30,558 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Fd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:30,672 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Yg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:31,578 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Gd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:38:31,693 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Zg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:32,598 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Hd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:32,713 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$0g], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:33,619 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Id], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:33,732 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$1g], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:38:34,638 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Jd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:38:34,753 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$2g], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:38:35,658 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Kd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:38:35,773 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$3g], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:38:36,499 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-08-16T04:38:36,678 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ld], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:38:36,793 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$4g], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:38:37,699 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Md], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:38:37,813 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$5g], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:38:38,719 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Nd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:38,832 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$6g], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:39,738 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Od], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:39,852 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$7g], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:40,758 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Pd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:38:40,873 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$8g], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:41,778 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Qd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:41,892 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$9g], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:42,798 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Rd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:42,913 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$+g], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:38:43,818 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Sd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:43,932 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$~g], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:44,838 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Td], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:44,953 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ah], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:45,858 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ud], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:38:45,972 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$bh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:46,879 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Vd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:46,993 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ch], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:47,898 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Wd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:48,012 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$dh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:38:48,918 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Xd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:49,032 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$eh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:49,938 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Yd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:50,053 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$fh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:50,959 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Zd], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:38:51,072 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$gh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:51,978 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$0d], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:52,093 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$hh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:52,998 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$1d], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:53,113 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ih], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:38:54,018 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$2d], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:54,133 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$jh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:55,038 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$3d], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:55,153 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$kh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:38:56,058 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$4d], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:38:56,173 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$lh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:38:57,078 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$5d], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:38:57,193 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$mh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:38:57,539 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-08-16T04:38:58,098 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$6d], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:38:58,213 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$nh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:38:59,119 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$7d], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:38:59,233 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$oh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:39:00,138 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$8d], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:39:00,253 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ph], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:01,159 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$9d], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:01,273 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$qh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:02,179 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$+d], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:02,293 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$rh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:39:03,199 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$~d], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:03,314 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$sh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:04,218 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ae], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:04,333 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$th], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:05,238 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$be], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:39:05,353 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$uh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:06,258 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ce], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:06,373 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$vh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:07,279 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$de], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:07,393 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$wh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:39:08,298 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ee], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:08,413 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$xh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:09,318 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$fe], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:09,433 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$yh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:10,338 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ge], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:39:10,453 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$zh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:11,358 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$he], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:11,473 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ah], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:12,378 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ie], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:12,493 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Bh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:39:13,398 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$je], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:13,513 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ch], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:14,418 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ke], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:14,534 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Dh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:15,438 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$le], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:39:15,553 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Eh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:16,459 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$me], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:16,574 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Fh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:17,478 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ne], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:17,593 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Gh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:39:18,498 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$oe], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:39:18,579 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-40 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
	at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
	at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
	at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
	... 5 more
2025-08-16T04:39:18,613 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Hh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:39:19,518 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$pe], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:19,634 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ih], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:20,540 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$qe], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:20,653 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Jh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:21,558 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$re], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:39:21,673 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Kh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:22,578 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$se], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:22,693 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Lh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:23,599 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$te], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:23,714 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Mh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:39:24,619 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ue], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:24,734 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Nh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:25,638 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ve], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:25,753 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Oh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:26,658 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$we], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:39:26,774 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ph], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:27,678 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$xe], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:27,793 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Qh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:28,699 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ye], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:28,813 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Rh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:39:29,718 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ze], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:29,834 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Sh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:30,738 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ae], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:30,854 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Th], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:31,759 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Be], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:39:31,873 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Uh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:32,779 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ce], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:32,893 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Vh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:33,797 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$De], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:33,914 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Wh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:39:34,818 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ee], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:34,934 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Xh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:35,838 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Fe], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:35,953 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Yh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:36,858 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ge], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:39:36,973 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Zh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:39:37,878 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$He], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:39:37,994 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$0h], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:39:38,898 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ie], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:39:39,014 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$1h], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:39:39,619 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
	at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
	at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
	at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
	... 5 more
2025-08-16T04:39:39,919 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Je], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:39:40,034 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$2h], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:39:40,938 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ke], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:39:41,054 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$3h], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:41,958 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Le], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:42,073 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$4h], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:42,978 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Me], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:43,094 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$5h], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:39:43,999 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ne], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:44,114 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$6h], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:45,017 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Oe], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:45,134 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$7h], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:46,039 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Pe], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:39:46,154 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$8h], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:47,058 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Qe], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:47,173 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$9h], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:48,078 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Re], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:48,194 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$+h], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:39:49,097 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Se], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:49,214 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$~h], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:50,118 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Te], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:50,234 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ai], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:51,138 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ue], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:39:51,254 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$bi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:52,158 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ve], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:52,274 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ci], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:53,179 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$We], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:53,294 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$di], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:39:54,198 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Xe], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:54,313 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ei], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:55,218 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ye], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:55,334 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$fi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:56,239 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ze], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:39:56,354 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$gi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:57,258 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$0e], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:57,374 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$hi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:58,278 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$1e], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:39:58,394 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ii], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:39:59,298 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$2e], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:39:59,414 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ji], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:40:00,318 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$3e], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:40:00,434 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ki], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:40:00,659 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
... 5 more
2025-08-16T04:40:01,338 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$4e], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:40:01,454 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$li], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:40:02,358 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$5e], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:40:02,475 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$mi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:40:03,378 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$6e], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:03,494 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ni], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:04,398 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$7e], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:04,514 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$oi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:05,417 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$8e], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:40:05,534 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$pi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:06,437 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$9e], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:06,554 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$qi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:07,457 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$+e], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:07,574 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ri], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:40:08,478 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$~e], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:08,594 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$si], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:09,498 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$af], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:09,615 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ti], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:10,518 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$bf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:40:10,634 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ui], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:11,537 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$cf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:11,654 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$vi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:12,558 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$df], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:12,674 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$wi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:40:13,578 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ef], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:13,694 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$xi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:14,598 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ff], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:14,715 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$yi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:15,618 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$gf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:40:15,734 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$zi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:16,638 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$hf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:16,754 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ai], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:17,658 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$if], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:17,775 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Bi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:40:18,678 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$jf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:18,794 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ci], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:19,698 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$kf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:19,814 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Di], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:20,719 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$lf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:40:20,834 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ei], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:40:21,699 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
... 5 more
2025-08-16T04:40:21,738 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$mf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:40:21,855 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Fi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:22,758 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$nf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:22,874 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Gi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:23,779 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$of], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:23,894 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Hi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:40:24,798 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$pf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:24,914 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ii], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:25,818 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$qf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:25,934 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ji], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:26,838 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$rf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:40:26,954 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ki], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:27,857 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$sf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:27,975 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Li], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:28,878 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$tf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:28,994 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Mi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:40:29,898 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$uf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:30,014 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ni], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:30,918 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$vf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:31,035 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Oi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:31,938 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$wf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:40:32,055 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Pi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:32,958 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$xf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:33,075 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Qi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:33,978 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$yf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:34,095 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ri], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:40:34,997 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$zf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:35,115 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Si], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:36,017 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Af], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:36,134 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ti], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:37,038 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Bf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:40:37,155 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ui], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:38,058 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Cf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:38,175 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Vi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:39,078 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Df], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:39,195 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Wi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:40:40,098 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ef], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:40:40,325 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Xi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:40:41,118 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ff], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:40:42,138 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Gf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:40:42,739 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
	at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
	at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
	at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
	... 5 more
2025-08-16T04:40:42,759 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | PhiAccrualFailureDetector | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | heartbeat interval is growing too large for address pekko://opendaylight-cluster-data@10.30.170.62:2550: 3318 millis
2025-08-16T04:40:42,761 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Yi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:40:43,158 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Hf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:40:43,784 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Zi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:40:44,179 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$If], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:40:44,805 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$0i], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:45,198 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Jf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:46,219 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Kf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:46,352 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$1i], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:47,238 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Lf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:40:48,258 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Mf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:48,648 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$2i], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:49,278 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Nf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:50,298 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Of], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:51,283 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | PhiAccrualFailureDetector | 189 - org.opendaylight.controller.repackaged-pekko - 11.0.0 | heartbeat interval is growing too large for address pekko://opendaylight-cluster-data@10.30.170.62:2550: 2634 millis 2025-08-16T04:40:51,286 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$3i], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:40:51,318 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Pf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:52,305 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$4i], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:52,337 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Qf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:53,325 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$5i], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:53,359 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Rf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:40:54,346 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$6i], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:54,378 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Sf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:55,365 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$7i], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:55,398 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Tf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:56,386 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$8i], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:40:56,418 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Uf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:57,405 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$9i], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:57,438 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Vf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:58,425 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$+i], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:58,458 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Wf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:40:59,445 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$~i], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:40:59,478 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Xf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:00,465 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$aj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:00,497 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Yf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:01,485 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$bj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:41:01,518 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Zf], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:41:02,505 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$cj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:41:02,538 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$0f], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:41:03,525 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$dj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:41:03,558 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$1f], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:41:03,779 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-39 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
	at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
	at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
	at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
	... 5 more
2025-08-16T04:41:04,545 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ej], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:41:04,577 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$2f], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:41:05,566 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$fj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:41:05,598 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$3f], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:06,584 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$gj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:06,619 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$4f], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:07,605 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$hj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:07,638 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$5f], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:41:08,625 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ij], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:08,657 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$6f], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:09,645 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$jj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:09,677 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$7f], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:10,666 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$kj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:41:10,697 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$8f], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:11,685 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$lj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:11,718 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$9f], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:12,705 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$mj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:12,738 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$+f], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:41:13,725 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$nj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:13,758 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$~f], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:14,746 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$oj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:14,778 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ag], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:15,765 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$pj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:41:15,798 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$bg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:16,785 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$qj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:16,818 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$cg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:17,805 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$rj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:17,838 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$dg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:41:18,826 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$sj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:18,858 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$eg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:19,845 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$tj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:19,878 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$fg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:20,866 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$uj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:41:20,899 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$gg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:21,885 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$vj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:21,919 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$hg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:22,905 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$wj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:22,938 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ig], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:41:23,925 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$xj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:23,958 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$jg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:24,819 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-40 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?] at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?] at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?] at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?] at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] 
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later. at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?] at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more 2025-08-16T04:41:24,948 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$yj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. 
isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:24,978 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$kg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:25,965 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$zj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:25,998 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$lg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:26,985 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Aj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:27,018 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$mg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
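The WARN from AbstractShardBackendResolver above is the client-side view of the same condition: the resolver times out because ShardManager replies with NoShardLeaderException, whose message explicitly advises "Try again later." A small retry-with-backoff sketch in the spirit of that advice follows; it is an illustration only, and the names NoShardLeaderError, connect_with_backoff and fake_connect are hypothetical, not part of the OpenDaylight client code.

import time

class NoShardLeaderError(Exception):
    """Stand-in for the NoShardLeaderException shown in the trace above."""

def connect_with_backoff(connect, attempts=5, initial_delay=1.0):
    # Retry the connect callable, doubling the delay after each failure.
    # Illustration only; not how AbstractShardBackendResolver actually works.
    delay = initial_delay
    for attempt in range(1, attempts + 1):
        try:
            return connect()
        except NoShardLeaderError:
            if attempt == attempts:
                raise
            time.sleep(delay)
            delay *= 2

# Usage sketch: a fake connect that fails twice before succeeding.
if __name__ == "__main__":
    state = {"calls": 0}

    def fake_connect():
        state["calls"] += 1
        if state["calls"] < 3:
            raise NoShardLeaderError("Shard currently has no leader. Try again later.")
        return "connected"

    print(connect_with_backoff(fake_connect, attempts=4, initial_delay=0.1))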
2025-08-16T04:41:28,006 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Bj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:28,038 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ng], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:29,025 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Cj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:29,057 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$og], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:30,046 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Dj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:41:30,078 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$pg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:31,065 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ej], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:31,098 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$qg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:32,085 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Fj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:32,118 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$rg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:41:33,105 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Gj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:33,137 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$sg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:34,125 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Hj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:34,159 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$tg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:35,145 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ij], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:41:35,178 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ug], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:35,205 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Current Term Verification 3Node Cluster.Current Term Verification After Adding Bulk Flow" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Current Term Verification 3Node Cluster.Current Term Verification After Adding Bulk Flow 2025-08-16T04:41:35,682 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Current Term Verification 3Node Cluster.Current Term Comparison Before And After Addition Of Flow" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Current Term Verification 3Node Cluster.Current Term Comparison Before And After Addition Of Flow 2025-08-16T04:41:36,130 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Current Term Verification 3Node Cluster.Delete and Add ten percent of the flows for 5 iterations" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Current Term Verification 3Node Cluster.Delete and Add ten percent of the flows for 5 iterations 2025-08-16T04:41:36,166 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Jj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:36,198 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$vg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:41:37,186 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Kj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:37,218 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$wg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:38,206 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Lj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:38,238 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$xg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:39,225 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Mj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:41:39,258 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$yg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:40,245 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Nj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:40,278 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$zg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:41,266 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Oj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:41,298 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ag], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:41:42,285 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Pj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:42,318 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Bg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:43,306 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Qj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:43,338 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Cg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:44,326 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Rj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:41:44,358 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Dg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:45,345 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Sj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:45,378 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Eg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:45,859 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?] at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?] at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?] 
at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?] at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later. at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?] at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?] 
at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more 2025-08-16T04:41:46,366 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Tj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:46,398 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Fg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:47,386 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Uj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:47,418 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Gg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:48,406 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Vj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:41:48,438 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Hg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:49,426 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Wj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:49,458 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ig], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:50,465 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Xj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:50,479 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Jg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:41:51,486 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Yj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:51,498 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Kg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:52,505 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Zj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:52,518 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Lg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:53,526 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$0j], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:41:53,538 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Mg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:54,546 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$1j], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:54,558 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ng], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:55,566 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$2j], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:55,578 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Og], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:41:56,585 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$3j], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:56,597 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Pg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:57,606 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$4j], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:57,618 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Qg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:58,626 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$5j], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:41:58,638 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Rg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:59,646 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$6j], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:41:59,657 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Sg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:00,666 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$7j], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:00,678 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Tg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:42:01,686 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$8j], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:01,697 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ug], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:02,706 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$9j], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:02,717 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Vg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:03,729 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$+j], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:42:03,738 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Wg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:04,746 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$~j], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:04,758 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Xg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:05,766 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ak], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:05,779 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Yg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:42:06,787 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$bk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:42:06,798 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Zg], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:42:06,899 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-08-16T04:42:07,806 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ck], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:42:07,818 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$0g], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:42:08,826 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$dk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:42:08,838 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$1g], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:42:09,846 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ek], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:42:09,858 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$2g], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:42:10,866 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$fk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:10,878 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$3g], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:11,886 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$gk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:11,898 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$4g], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:12,905 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$hk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:42:12,918 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$5g], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:13,926 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ik], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:13,938 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$6g], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:14,946 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$jk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:14,958 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$7g], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:42:15,966 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$kk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:15,978 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$8g], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:16,986 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$lk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:16,998 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$9g], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:18,007 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$mk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:42:18,017 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$+g], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:19,027 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$nk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:19,038 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$~g], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:20,047 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ok], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:20,058 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ah], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:42:21,066 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$pk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:21,078 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$bh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:22,086 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$qk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:22,098 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ch], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:23,106 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$rk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:42:23,117 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$dh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:24,125 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$sk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:24,138 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$eh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:25,147 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$tk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:25,158 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$fh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:42:26,165 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$uk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:42:26,178 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$gh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:42:27,186 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$vk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:42:27,198 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$hh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:42:27,939 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-40 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-08-16T04:42:28,206 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$wk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:42:28,218 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ih], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:42:29,226 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$xk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:42:29,238 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$jh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:42:30,245 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$yk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:30,258 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$kh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:31,267 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$zk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:31,278 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$lh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:32,286 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ak], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:42:32,298 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$mh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:33,306 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Bk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:33,318 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$nh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:34,326 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ck], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:34,338 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$oh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:42:35,346 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Dk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:35,357 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ph], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:36,366 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ek], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:36,378 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$qh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:37,386 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Fk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:42:37,398 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$rh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:38,406 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Gk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:38,418 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$sh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:39,426 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Hk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:39,438 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$th], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:42:40,446 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ik], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:40,458 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$uh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:41,466 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Jk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:41,477 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$vh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:42,486 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Kk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:42:42,498 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$wh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:43,506 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Lk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:43,518 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$xh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:44,526 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Mk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:44,538 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$yh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:42:45,546 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Nk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:45,558 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$zh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:46,566 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ok], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:46,578 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ah], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:47,587 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Pk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:42:47,597 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Bh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:42:48,607 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Qk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:42:48,618 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ch], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:42:48,979 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-41 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
	at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
	at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
	at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
	... 5 more
2025-08-16T04:42:49,626 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Rk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:42:49,638 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Dh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:42:50,646 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Sk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:42:50,658 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Eh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:42:51,666 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Tk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:42:51,678 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Fh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:52,686 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Uk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:52,698 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Gh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:53,707 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Vk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:53,718 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Hh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:42:54,727 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Wk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:54,738 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ih], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:55,747 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Xk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:55,758 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Jh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:56,767 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Yk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:42:56,777 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Kh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:57,786 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Zk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:57,798 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Lh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:58,807 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$0k], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:58,818 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Mh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:42:59,826 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$1k], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:42:59,838 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Nh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:00,847 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$2k], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:00,858 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Oh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:01,867 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$3k], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:43:01,878 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ph], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:02,886 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$4k], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:02,897 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Qh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:03,907 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$5k], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:03,918 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Rh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:43:04,926 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$6k], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:04,938 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Sh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:05,946 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$7k], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:05,958 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Th], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:06,966 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$8k], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:43:06,977 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Uh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:43:07,987 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$9k], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:43:07,998 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Vh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:43:09,007 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$+k], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:43:09,018 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Wh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:43:10,019 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-41 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
	at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
	at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
	at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
	... 5 more
2025-08-16T04:43:10,027 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$~k], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:43:10,038 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Xh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:43:11,047 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$al], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:43:11,057 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Yh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:12,067 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$bl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:12,078 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Zh], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:13,087 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$cl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:13,098 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$0h], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:43:14,106 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$dl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:14,118 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$1h], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:15,128 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$el], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:15,137 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$2h], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:16,146 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$fl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:43:16,158 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$3h], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:17,167 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$gl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:17,177 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$4h], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:18,186 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$hl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:18,198 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$5h], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:43:19,207 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$il], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:19,218 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$6h], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:20,226 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$jl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:20,237 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$7h], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:21,247 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$kl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:43:21,257 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$8h], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:22,266 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ll], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:22,277 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$9h], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:23,287 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ml], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:23,298 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$+h], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:43:24,306 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$nl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:24,318 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$~h], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:25,327 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ol], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:25,338 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ai], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:26,346 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$pl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:43:26,357 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$bi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:27,367 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ql], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:27,378 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ci], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:28,387 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$rl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:28,398 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$di], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:43:29,406 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$sl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:43:29,418 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ei], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:43:30,426 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$tl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:43:30,438 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$fi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:43:31,059 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
	at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
	at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
	at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
	... 5 more
2025-08-16T04:43:31,447 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ul], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:43:31,457 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$gi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:43:32,467 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$vl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:43:32,478 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$hi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:43:33,487 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$wl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:33,498 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ii], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:34,508 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$xl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:34,519 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ji], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:35,527 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$yl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:43:35,538 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ki], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:36,546 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$zl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:36,558 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$li], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:37,567 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Al], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:37,577 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$mi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:43:38,587 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Bl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:38,598 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ni], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:39,607 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Cl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:39,618 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$oi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:40,627 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Dl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:43:40,638 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$pi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:41,647 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$El], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:41,658 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$qi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:42,667 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Fl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:42,678 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ri], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:43:43,687 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Gl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:43,697 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$si], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:44,707 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Hl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:44,718 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ti], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:45,727 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Il], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:43:45,737 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ui], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:46,748 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Jl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:46,758 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$vi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:47,767 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Kl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:47,778 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$wi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:43:48,787 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ll], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:48,798 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$xi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:49,807 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ml], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:49,818 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$yi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:50,827 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Nl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:43:50,837 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$zi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:43:51,847 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ol], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:43:51,858 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ai], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:43:52,099 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-39 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
	at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
	at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
	at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
	... 5 more
2025-08-16T04:43:52,868 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Pl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:43:52,878 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Bi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:43:53,887 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ql], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:43:53,898 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ci], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:43:54,908 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Rl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:43:54,918 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Di], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:55,927 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Sl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:55,937 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ei], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:56,947 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Tl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:56,958 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Fi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:43:57,967 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ul], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:57,978 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Gi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:58,987 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Vl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:43:58,998 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Hi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:00,007 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Wl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:44:00,017 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ii], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:01,028 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Xl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:01,038 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ji], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:02,047 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Yl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:02,058 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ki], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:44:03,067 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Zl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:03,078 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Li], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:04,086 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$0l], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:04,098 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Mi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:05,107 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$1l], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:44:05,118 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ni], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:06,127 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$2l], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:06,138 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Oi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:07,147 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$3l], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:07,157 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Pi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:44:08,166 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$4l], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:08,178 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Qi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:09,186 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$5l], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:09,197 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ri], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:10,206 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$6l], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:44:10,218 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Si], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:44:11,227 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$7l], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:44:11,238 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ti], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:44:12,248 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$8l], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:44:12,258 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ui], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:44:13,140 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-31 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-08-16T04:44:13,267 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$9l], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:44:13,278 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Vi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:44:14,288 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$+l], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:44:14,298 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Wi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:15,307 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$~l], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:15,318 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Xi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:16,327 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$am], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:16,338 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Yi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:44:17,347 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$bm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:17,357 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Zi], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:18,367 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$cm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:18,377 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$0i], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:19,388 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$dm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:44:19,398 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$1i], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:20,407 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$em], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:20,417 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$2i], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:21,427 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$fm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:21,438 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$3i], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:44:22,447 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$gm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:22,458 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$4i], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:23,467 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$hm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:23,477 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$5i], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:24,487 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$im], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:44:24,498 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$6i], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:25,508 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$jm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:25,518 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$7i], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:26,527 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$km], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:26,538 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$8i], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:44:27,547 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$lm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:27,558 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$9i], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:28,567 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$mm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:28,578 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$+i], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:29,587 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$nm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:44:29,598 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$~i], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:30,607 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$om], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:30,618 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$aj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:31,628 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$pm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:31,638 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$bj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:44:32,647 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$qm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:44:32,658 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$cj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:44:33,667 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$rm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:44:33,679 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$dj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:44:34,179 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-31 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-08-16T04:44:34,688 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$sm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:44:34,697 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ej], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:44:35,707 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$tm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:44:35,718 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$fj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:44:36,727 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$um], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:36,738 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$gj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:37,747 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$vm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:37,758 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$hj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:38,768 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$wm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:44:38,777 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ij], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:39,787 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$xm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:39,798 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$jj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:40,807 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ym], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:40,818 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$kj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:44:41,827 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$zm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:41,838 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$lj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:42,847 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Am], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:42,857 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$mj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:43,867 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Bm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:44:43,878 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$nj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:44,888 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Cm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:44,898 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$oj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:45,907 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Dm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:45,918 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$pj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:44:46,927 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Em], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:46,938 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$qj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:47,947 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Fm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:47,958 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$rj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:48,967 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Gm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:44:48,977 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$sj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:49,988 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Hm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:49,998 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$tj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:51,008 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Im], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:51,017 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$uj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:44:52,027 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Jm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:52,038 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$vj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:53,047 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Km], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:53,058 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$wj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:54,067 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Lm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:44:54,078 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$xj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:44:55,087 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Mm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:44:55,098 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$yj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:44:55,219 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-39 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-08-16T04:44:56,108 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Nm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:44:56,118 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$zj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:44:57,127 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Om], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:44:57,137 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Aj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:44:58,148 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Pm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:44:58,158 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Bj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:59,167 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Qm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:44:59,178 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Cj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:00,188 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Rm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:00,198 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Dj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:45:01,208 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Sm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:01,218 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ej], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:02,227 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Tm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:02,238 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Fj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:03,248 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Um], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:45:03,258 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Gj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:04,267 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Vm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:04,278 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Hj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:05,287 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Wm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:05,298 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ij], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:45:06,308 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Xm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:06,317 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Jj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:07,327 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ym], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:07,338 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Kj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:08,348 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Zm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:45:08,358 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Lj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:09,368 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$0m], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:09,378 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Mj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:10,387 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$1m], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:10,398 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Nj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:45:11,408 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$2m], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:11,417 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Oj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:12,427 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$3m], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:12,437 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Pj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:13,447 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$4m], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:45:13,458 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Qj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:45:14,468 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$5m], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:45:14,478 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Rj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:45:15,488 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$6m], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:45:15,498 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Sj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:45:16,259 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-08-16T04:45:16,507 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$7m], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:45:16,518 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Tj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:45:17,528 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$8m], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:45:17,538 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Uj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:18,548 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$9m], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:18,558 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Vj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:19,568 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$+m], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:19,577 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Wj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:45:20,588 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$~m], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:20,598 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Xj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:21,607 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$an], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:21,618 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Yj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:22,627 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$bn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:45:22,638 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Zj], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:23,648 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$cn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:23,658 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$0j], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:24,668 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$dn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:24,678 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$1j], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:45:25,687 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$en], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:25,698 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$2j], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:26,708 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$fn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:26,718 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$3j], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:27,727 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$gn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:45:27,738 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$4j], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:28,748 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$hn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:28,758 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$5j], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:29,767 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$in], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:29,777 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$6j], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:45:30,788 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$jn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:30,798 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$7j], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:31,809 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$kn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:31,818 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$8j], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:32,827 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ln], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:45:32,837 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$9j], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:33,848 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$mn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:33,858 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$+j], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:34,867 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$nn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:34,878 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$~j], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:45:35,887 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$on], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:45:35,898 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ak], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:45:36,908 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$pn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:45:36,918 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$bk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:45:37,299 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-39 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
	at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
	at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
	at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
	... 5 more
2025-08-16T04:45:37,928 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$qn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:45:37,938 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ck], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:45:38,948 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$rn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:45:38,958 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$dk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:45:39,968 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$sn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:39,978 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ek], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:40,987 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$tn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:40,998 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$fk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:42,008 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$un], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:45:42,018 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$gk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:43,027 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$vn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:43,038 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$hk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:44,048 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$wn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:44,058 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ik], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:45:45,068 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$xn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:45,078 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$jk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:46,088 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$yn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:46,097 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$kk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:47,108 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$zn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:45:47,118 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$lk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:48,128 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$An], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:48,137 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$mk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:49,147 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Bn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:49,158 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$nk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:45:50,168 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Cn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:50,178 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ok], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:51,188 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Dn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:51,198 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$pk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:52,208 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$En], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:45:52,218 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$qk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:53,229 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Fn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:53,237 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$rk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:54,248 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Gn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:54,258 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$sk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:45:55,267 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Hn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:55,278 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$tk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:56,288 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$In], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:56,298 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$uk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:45:57,307 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Jn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:45:57,317 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$vk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:45:58,327 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Kn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:45:58,337 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$wk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:45:58,339 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
	at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
	at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
	at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
	... 5 more
2025-08-16T04:45:59,347 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ln], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:45:59,358 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$xk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:46:00,367 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Mn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:46:00,377 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$yk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:46:01,388 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Nn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:46:01,398 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$zk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:02,407 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$On], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:02,418 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ak], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:03,429 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Pn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:03,438 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Bk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:46:04,448 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Qn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:04,457 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ck], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:05,467 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Rn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:05,477 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Dk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:06,487 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Sn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:46:06,497 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ek], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:07,508 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Tn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:07,518 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Fk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:08,527 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Un], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:08,538 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Gk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:46:09,547 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Vn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:09,558 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Hk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:10,568 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Wn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:10,578 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ik], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:11,588 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Xn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:46:11,598 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Jk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:12,607 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Yn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:12,618 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Kk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:13,628 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Zn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:13,637 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Lk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:46:14,647 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$0n], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:14,658 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Mk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:15,668 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$1n], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:15,678 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Nk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:16,687 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$2n], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:46:16,698 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ok], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:46:17,708 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$3n], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:46:17,717 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Pk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:46:18,727 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$4n], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:46:18,737 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Qk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:46:19,379 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-40 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
	at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
	at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
	at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
	... 5 more
2025-08-16T04:46:19,748 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$5n], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:46:19,758 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Rk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:46:20,768 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$6n], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:46:20,778 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Sk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:21,788 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$7n], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:21,797 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Tk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:22,808 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$8n], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:22,818 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Uk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:46:23,827 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$9n], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:23,837 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Vk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:24,849 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$+n], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:24,858 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Wk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:25,868 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$~n], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:46:25,878 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Xk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:26,888 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ao], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:26,897 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Yk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:27,908 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$bo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:27,918 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Zk], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:46:28,928 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$co], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:28,938 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$0k], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:29,948 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$do], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:29,958 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$1k], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:30,967 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$eo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:46:30,978 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$2k], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:31,988 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$fo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:31,998 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$3k], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:33,008 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$go], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:33,018 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$4k], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:46:34,028 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ho], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:34,038 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$5k], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:35,048 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$io], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:35,058 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$6k], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:36,068 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$jo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:46:36,077 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$7k], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:37,088 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ko], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:37,097 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$8k], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:38,108 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$lo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:38,118 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$9k], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:46:39,127 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$mo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:46:39,137 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$+k], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:46:40,148 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$no], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:46:40,158 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$~k], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:46:40,419 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
	at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
	at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
	at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
	... 5 more
2025-08-16T04:46:41,168 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$oo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:46:41,177 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$al], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:46:42,188 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$po], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:46:42,198 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$bl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:46:43,208 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$qo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:43,218 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$cl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:44,227 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ro], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:44,238 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$dl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:45,248 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$so], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:46:45,258 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$el], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:46,269 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$to], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:46,278 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$fl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:47,288 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$uo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:47,298 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$gl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:46:48,308 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$vo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:48,318 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$hl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:49,328 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$wo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:49,338 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$il], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:50,347 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$xo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:46:50,357 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$jl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:51,368 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$yo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:51,379 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$kl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:52,388 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$zo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:52,399 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ll], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:46:53,408 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ao], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:53,418 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ml], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:54,428 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Bo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:54,438 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$nl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:55,448 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Co], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:46:55,458 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ol], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:56,468 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Do], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:56,478 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$pl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:57,488 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Eo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:57,497 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ql], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:46:58,508 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Fo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:58,518 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$rl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:59,527 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Go], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:46:59,537 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$sl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:00,548 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ho], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:47:00,558 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$tl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:47:01,459 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
	at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
	at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
	at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
	... 5 more
2025-08-16T04:47:01,568 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Io], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:47:01,578 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ul], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:02,588 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Jo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:02,598 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$vl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:03,608 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ko], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:03,617 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$wl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:47:04,628 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Lo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:04,637 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$xl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:05,649 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Mo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:05,658 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$yl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:06,668 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$No], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:47:06,677 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$zl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:07,688 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Oo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:07,698 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Al], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:08,708 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Po], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:08,718 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Bl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:47:09,728 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Qo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:09,738 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Cl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:10,748 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ro], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:10,758 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Dl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:11,768 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$So], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:47:11,778 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$El], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:12,789 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$To], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:12,798 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Fl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:13,807 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Uo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:13,818 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Gl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:47:14,829 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Vo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:14,838 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Hl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:15,848 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Wo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:15,857 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Il], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:16,868 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Xo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:47:16,878 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Jl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:17,888 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Yo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:17,898 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Kl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:18,908 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Zo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:18,918 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ll], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:47:19,928 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$0o], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:19,938 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ml], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:20,948 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$1o], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:20,958 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Nl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:21,968 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$2o], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:47:21,978 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ol], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:47:22,499 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-31 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
	at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
	at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
	at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
	... 5 more
2025-08-16T04:47:22,988 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$3o], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:47:22,998 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Pl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:24,008 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$4o], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:24,017 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ql], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:25,028 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$5o], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:25,038 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Rl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:47:26,048 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$6o], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:26,057 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Sl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:27,068 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$7o], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:27,077 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Tl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:28,088 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$8o], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:47:28,098 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ul], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:29,108 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$9o], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:29,118 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Vl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:30,128 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$+o], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:30,137 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Wl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:47:31,148 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$~o], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:31,158 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Xl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:32,168 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ap], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:32,177 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Yl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:33,189 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$bp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:47:33,197 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Zl], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:34,208 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$cp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:34,218 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$0l], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:35,228 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$dp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:35,237 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$1l], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:47:36,248 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ep], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:36,257 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$2l], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:37,268 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$fp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:37,278 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$3l], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:38,289 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$gp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:47:38,298 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$4l], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:39,309 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$hp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:39,318 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$5l], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:40,328 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ip], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:40,338 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$6l], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:47:41,348 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$jp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:41,358 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$7l], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:42,368 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$kp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:42,378 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$8l], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:43,388 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$lp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:47:43,398 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$9l], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:47:43,539 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-41 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-08-16T04:47:44,409 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$mp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:47:44,418 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$+l], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:45,428 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$np], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:45,438 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$~l], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:46,448 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$op], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:46,457 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$am], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:47:47,468 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$pp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:47,477 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$bm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:48,488 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$qp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:48,497 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$cm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:49,508 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$rp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:47:49,518 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$dm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:50,528 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$sp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:50,537 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$em], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:51,548 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$tp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:51,558 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$fm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:47:52,568 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$up], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:52,578 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$gm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:53,588 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$vp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:53,597 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$hm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:54,608 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$wp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:47:54,617 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$im], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:55,628 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$xp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:55,638 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$jm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:56,648 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$yp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:56,658 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$km], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:47:57,668 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$zp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:57,677 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$lm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:58,688 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ap], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:58,698 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$mm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:47:59,708 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Bp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:47:59,718 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$nm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:00,728 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Cp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:00,737 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$om], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:01,749 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Dp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:01,757 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$pm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:48:02,769 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ep], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:48:02,778 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$qm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:48:03,788 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Fp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:48:03,797 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$rm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:48:04,579 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-08-16T04:48:04,808 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Gp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:48:04,817 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$sm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:48:05,828 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Hp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:48:05,838 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$tm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:48:06,848 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ip], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:06,857 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$um], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:07,868 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Jp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:07,878 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$vm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:08,889 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Kp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:48:08,897 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$wm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:09,909 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Lp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:09,918 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$xm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:10,929 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Mp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:10,938 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ym], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:48:11,948 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Np], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:11,958 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$zm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:12,968 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Op], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:12,978 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Am], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:13,988 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Pp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:48:13,998 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Bm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:15,009 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Qp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:15,018 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Cm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:16,029 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Rp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:16,039 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Dm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:48:16,940 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Current Term Verification 3Node Cluster.Current Term Verification After Continuous Deletion and Addition Of Flows for 5 iterations" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Current Term Verification 3Node Cluster.Current Term Verification After Continuous Deletion and Addition Of Flows for 5 iterations 2025-08-16T04:48:17,049 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Sp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:17,058 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Em], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:17,555 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Current Term Verification 3Node Cluster.Current Term Comparison Before and After Continuous Deletion and Addition Of Flows for 5 iterations" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Current Term Verification 3Node Cluster.Current Term Comparison Before and After Continuous Deletion and Addition Of Flows for 5 iterations 2025-08-16T04:48:17,949 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Current Term Verification 3Node Cluster.Delete All Flows From Follower Node" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Current Term Verification 3Node Cluster.Delete All Flows From Follower Node 2025-08-16T04:48:18,069 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Tp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:48:18,079 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Fm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:19,088 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Up], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:19,098 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Gm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:20,108 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Vp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:20,118 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Hm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:48:21,128 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Wp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:21,138 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Im], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:22,149 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Xp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:22,158 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Jm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:23,168 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Yp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:48:23,177 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Km], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:24,188 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Zp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:24,198 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Lm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:25,209 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$0p], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:25,218 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Mm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:25,619 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
... 5 more
2025-08-16T04:48:26,229 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$1p], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:48:26,238 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Nm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:48:27,249 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$2p], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:48:27,258 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Om], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:28,268 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$3p], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:28,278 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Pm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:29,288 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$4p], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:29,298 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Qm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:48:30,309 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$5p], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:30,318 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Rm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:31,328 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$6p], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:31,337 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Sm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:32,348 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$7p], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:48:32,357 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Tm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:33,369 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$8p], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:33,378 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Um], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:34,389 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$9p], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:34,398 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Vm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:48:35,408 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$+p], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:35,418 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Wm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:36,428 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$~p], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:36,438 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Xm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:37,448 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$aq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:48:37,458 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ym], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:38,468 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$bq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:38,477 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Zm], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:39,488 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$cq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:39,498 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$0m], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:48:40,508 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$dq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:40,518 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$1m], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:41,528 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$eq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:41,538 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$2m], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:42,549 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$fq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:48:42,558 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$3m], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:43,568 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$gq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:43,578 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$4m], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:44,588 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$hq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:44,598 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$5m], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:48:45,608 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$iq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:45,617 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$6m], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:46,628 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$jq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:46,637 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$7m], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:46,659 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?] 
at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
... 5 more
2025-08-16T04:48:47,648 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$kq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:48:47,658 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$8m], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:48:48,668 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$lq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:48:48,678 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$9m], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:48:49,688 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$mq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:49,698 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$+m], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:50,708 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$nq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:50,717 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$~m], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:51,728 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$oq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:48:51,737 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$an], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:52,749 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$pq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:52,758 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$bn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:53,768 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$qq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:53,777 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$cn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:48:54,788 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$rq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:54,798 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$dn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:55,808 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$sq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:55,818 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$en], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:56,828 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$tq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:48:56,838 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$fn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:57,848 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$uq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:57,858 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$gn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:58,868 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$vq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:58,877 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$hn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:48:59,888 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$wq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:48:59,898 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$in], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:00,909 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$xq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:00,918 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$jn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:01,929 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$yq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:49:01,938 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$kn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:02,948 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$zq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:02,958 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ln], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:03,968 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Aq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:03,978 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$mn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:49:04,989 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Bq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:04,997 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$nn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:06,008 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Cq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:06,017 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$on], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:07,028 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Dq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:49:07,037 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$pn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:49:07,699 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-08-16T04:49:08,049 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Eq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:49:08,057 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$qn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:09,068 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Fq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:09,078 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$rn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:10,088 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Gq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:10,098 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$sn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:49:11,108 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Hq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:11,118 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$tn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:12,128 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Iq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:12,138 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$un], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:13,148 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Jq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:49:13,158 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$vn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:14,169 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Kq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:14,178 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$wn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:15,188 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Lq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:15,198 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$xn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:49:16,208 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Mq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:16,218 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$yn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:17,228 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Nq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:17,238 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$zn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:18,248 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Oq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:49:18,258 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$An], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:19,268 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Pq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:19,278 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Bn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:20,288 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Qq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:20,298 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Cn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:49:21,308 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Rq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:21,318 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Dn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:22,328 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Sq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:22,337 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$En], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:23,348 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Tq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:49:23,358 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Fn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:24,368 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Uq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:24,378 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Gn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:25,389 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Vq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:25,398 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Hn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:49:26,409 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Wq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:26,418 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$In], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:27,428 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Xq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:27,438 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Jn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:28,448 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Yq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:49:28,458 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Kn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:49:28,740 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-31 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-08-16T04:49:29,472 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Zq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:49:29,478 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ln], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:30,488 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$0q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:30,498 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Mn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:31,508 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$1q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:31,517 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Nn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:49:32,528 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$2q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:32,537 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$On], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:33,548 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$3q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:33,558 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Pn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:34,568 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$4q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:49:34,578 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Qn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:35,589 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$5q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:35,598 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Rn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:36,608 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$6q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:36,618 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Sn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:49:37,628 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$7q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:37,638 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Tn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:38,649 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$8q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:38,658 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Un], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:39,669 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$9q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:49:39,678 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Vn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:40,688 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$+q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:40,698 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Wn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:41,709 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$~q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:41,718 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Xn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:49:42,729 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ar], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:42,737 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Yn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:43,748 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$br], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:43,758 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Zn], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:44,768 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$cr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:49:44,778 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$0n], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:45,788 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$dr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:45,798 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$1n], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:46,808 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$er], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:46,818 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$2n], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:49:47,828 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$fr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:49:47,838 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$3n], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:49:48,848 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$gr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:49:48,857 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$4n], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:49:49,779 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-40 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
	at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
	at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
	at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
	... 5 more
2025-08-16T04:49:49,869 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$hr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:49:49,878 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$5n], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:49:50,889 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ir], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:49:50,898 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$6n], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:49:51,909 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$jr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:51,918 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$7n], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:52,928 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$kr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:52,938 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$8n], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:53,949 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$lr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:49:53,958 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$9n], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:54,968 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$mr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:54,978 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$+n], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:55,989 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$nr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:55,998 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$~n], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:49:57,009 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$or], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:57,018 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ao], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:58,028 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$pr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:58,038 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$bo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:49:59,058 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$co], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:49:59,439 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$qr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:00,078 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$do], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:01,098 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$eo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:01,154 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$rr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:02,119 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$fo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:50:02,833 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$sr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:03,138 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$go], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:03,848 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$tr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:04,158 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ho], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:04,868 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ur], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:50:05,178 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$io], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:05,889 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$vr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:06,198 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$jo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:06,911 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$wr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:07,218 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ko], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:50:07,929 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$xr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:08,238 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$lo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:08,948 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$yr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:09,257 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$mo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:09,968 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$zr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:50:10,278 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$no], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:50:10,819 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
	at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
	at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
	at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
	... 5 more
2025-08-16T04:50:10,988 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ar], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:50:11,298 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$oo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:12,008 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Br], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:12,318 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$po], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:13,028 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Cr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:13,338 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$qo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:50:14,053 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Dr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:14,358 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ro], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:15,068 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Er], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:15,378 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$so], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:16,089 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Fr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:50:16,398 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$to], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:17,108 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Gr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:17,418 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$uo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:18,129 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Hr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:18,437 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$vo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:50:19,149 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ir], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:19,458 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$wo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:20,168 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Jr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:20,478 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$xo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:21,188 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Kr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:50:21,498 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$yo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:22,209 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Lr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:22,518 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$zo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:23,228 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Mr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:23,538 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ao], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:50:24,248 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Nr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:24,559 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Bo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:25,268 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Or], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:25,578 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Co], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:26,288 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Pr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:50:26,598 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Do], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:27,308 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Qr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:27,618 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Eo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:28,328 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Rr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:28,638 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Fo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:50:29,348 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Sr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:29,658 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Go], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:30,368 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Tr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:30,678 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ho], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:31,389 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ur], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:50:31,698 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Io], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:50:31,859 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-08-16T04:50:32,409 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Vr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:50:32,718 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Jo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:33,429 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Wr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:33,738 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ko], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:34,449 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Xr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:34,758 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Lo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:50:35,470 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Yr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:35,778 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Mo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:36,488 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Zr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:36,797 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$No], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:37,508 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$0r], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:50:37,817 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Oo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:38,528 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$1r], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:38,837 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Po], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:39,548 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$2r], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:39,857 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Qo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:50:40,568 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$3r], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:40,878 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ro], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:41,588 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$4r], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:41,898 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$So], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:42,608 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$5r], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:50:42,918 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$To], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:43,628 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$6r], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:43,938 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Uo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:44,649 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$7r], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:44,957 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Vo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:50:45,669 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$8r], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:45,978 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Wo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:46,688 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$9r], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:46,997 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Xo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:47,708 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$+r], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:50:48,018 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Yo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:48,728 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$~r], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:49,038 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Zo], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:49,749 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$as], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:50,058 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$0o], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:50:50,768 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$bs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:50:51,078 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$1o], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:50:51,789 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$cs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:50:52,098 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$2o], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:50:52,809 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ds], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:50:52,899 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-31 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-08-16T04:50:53,118 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$3o], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:50:53,829 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$es], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:50:54,138 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$4o], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:50:54,848 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$fs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:55,158 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$5o], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:55,869 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$gs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:56,178 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$6o], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:56,888 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$hs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:50:57,198 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$7o], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:57,908 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$is], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:58,218 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$8o], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:58,928 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$js], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:50:59,238 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$9o], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:50:59,949 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ks], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:00,258 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$+o], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:00,969 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ls], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:01,278 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$~o], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:01,988 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ms], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:51:02,298 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ap], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:03,008 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ns], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:03,318 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$bp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:04,029 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$os], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:04,338 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$cp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:51:05,048 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ps], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:05,357 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$dp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:06,069 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$qs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:06,378 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ep], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:07,098 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$rs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:51:07,398 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$fp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:08,118 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ss], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:08,418 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$gp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:09,138 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ts], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:09,438 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$hp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:51:10,158 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$us], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:10,458 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ip], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:11,178 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$vs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:11,478 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$jp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:12,198 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ws], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:51:12,498 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$kp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:13,219 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$xs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:13,518 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$lp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:13,939 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-31 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?] at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?] at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?] 
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more 2025-08-16T04:51:14,239 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ys], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:14,538 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$mp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:15,258 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$zs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:15,558 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$np], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:16,279 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$As], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:51:16,578 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$op], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:17,299 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Bs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:17,598 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$pp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:18,329 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Cs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:18,618 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$qp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:51:19,349 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ds], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:19,638 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$rp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:20,368 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Es], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:20,658 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$sp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:21,389 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Fs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:51:21,678 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$tp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:22,409 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Gs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:22,698 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$up], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:23,428 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Hs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:23,718 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$vp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:51:24,469 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Is], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:24,738 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$wp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:25,489 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Js], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:25,758 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$xp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:26,508 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ks], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:51:26,778 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$yp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:27,528 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ls], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:27,798 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$zp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:28,548 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ms], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:28,818 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ap], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:51:29,569 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ns], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:29,838 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Bp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:30,589 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Os], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:30,858 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Cp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:31,608 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ps], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:51:31,878 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Dp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:32,631 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Qs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:32,897 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ep], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:33,650 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Rs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:33,918 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Fp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:51:34,668 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ss], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:51:34,938 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Gp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:51:34,979 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-39 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
	at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
	at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
	at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
	... 5 more
2025-08-16T04:51:35,689 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ts], minVersion=POTASSIUM, maxVersion=POTASSIUM}.
isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:35,957 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Hp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:36,709 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Us], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:36,979 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ip], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:37,728 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Vs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:37,998 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Jp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:51:38,749 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ws], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:39,018 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Kp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:39,768 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Xs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:40,038 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Lp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:40,789 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ys], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:51:41,058 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Mp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:41,809 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Zs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:42,078 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Np], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:42,829 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$0s], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:43,098 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Op], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:51:43,849 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$1s], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:44,119 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Pp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:44,868 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$2s], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:45,138 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Qp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:45,889 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$3s], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:51:46,158 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Rp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:46,910 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$4s], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:47,178 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Sp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:47,928 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$5s], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:48,198 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Tp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:51:48,949 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$6s], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:49,218 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Up], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:49,969 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$7s], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:50,238 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Vp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:50,989 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$8s], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:51:51,258 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Wp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:52,008 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$9s], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:52,278 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Xp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:53,028 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$+s], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:53,298 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Yp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:51:54,050 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$~s], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:51:54,318 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Zp], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:51:55,069 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$at], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:51:55,339 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$0p], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:51:56,020 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-40 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
	at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
	at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
	at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
	... 5 more
2025-08-16T04:51:56,090 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$bt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:51:56,358 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$1p], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:51:57,109 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ct], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:51:57,379 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$2p], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:51:58,129 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$dt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:58,398 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$3p], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:59,149 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$et], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:51:59,418 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$4p], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:00,170 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ft], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:52:00,438 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$5p], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:01,188 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$gt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:01,461 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$6p], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:02,209 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ht], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:02,478 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$7p], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:52:03,228 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$it], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:03,498 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$8p], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:04,248 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$jt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:04,518 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$9p], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:05,269 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$kt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:52:05,538 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$+p], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:06,289 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$lt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:06,558 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$~p], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:07,309 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$mt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:07,578 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$aq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:52:08,329 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$nt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:08,598 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$bq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:09,349 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ot], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:09,618 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$cq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:10,370 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$pt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:52:10,638 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$dq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:11,388 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$qt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:11,658 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$eq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:12,409 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$rt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:12,678 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$fq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:52:13,429 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$st], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:13,698 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$gq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:14,449 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$tt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:14,718 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$hq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:15,468 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ut], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:52:15,737 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$iq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:52:16,488 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$vt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:52:16,758 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$jq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:52:17,059 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
	at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
	at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
	at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
	... 5 more
2025-08-16T04:52:17,509 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$wt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:52:17,778 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$kq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:52:18,528 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$xt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:52:18,798 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$lq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:52:19,549 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$yt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:52:19,818 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$mq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:20,569 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$zt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:20,838 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$nq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:21,588 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$At], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:21,858 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$oq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:52:22,609 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Bt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:22,878 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$pq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:23,628 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ct], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:23,898 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$qq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:24,648 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Dt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:52:24,918 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$rq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:25,669 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Et], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:25,938 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$sq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:26,689 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ft], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:26,958 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$tq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:52:27,710 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Gt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:27,978 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$uq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:28,728 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ht], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:28,998 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$vq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:29,749 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$It], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:52:30,019 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$wq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:30,769 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Jt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:31,039 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$xq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:31,789 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Kt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:32,058 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$yq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:52:32,809 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Lt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:33,078 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$zq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:33,829 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Mt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:34,098 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Aq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:34,848 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Nt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:52:35,117 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Bq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:35,868 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ot], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:36,139 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Cq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:36,888 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Pt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:37,157 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Dq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:52:37,909 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Qt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:52:38,100 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-41 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
	at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
	at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
	at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
	... 5 more
2025-08-16T04:52:38,178 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Eq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:52:38,928 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Rt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:39,198 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Fq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:39,948 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$St], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:40,217 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Gq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:40,968 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Tt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:52:41,238 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Hq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:41,989 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ut], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:42,257 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Iq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:43,008 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Vt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:43,278 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Jq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:52:44,028 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Wt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:44,298 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Kq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:45,048 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Xt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:45,317 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Lq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:46,069 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Yt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:52:46,337 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Mq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:47,089 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Zt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:47,358 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Nq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:48,108 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$0t], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:48,378 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Oq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:52:49,128 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$1t], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:49,398 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Pq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:50,149 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$2t], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:50,417 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Qq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:51,169 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$3t], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:52:51,438 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Rq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:52,188 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$4t], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:52,458 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Sq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:53,208 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$5t], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:53,477 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Tq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:52:54,228 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$6t], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:54,498 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Uq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:55,249 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$7t], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:55,518 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Vq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:52:56,269 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$8t], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:52:56,538 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Wq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:52:57,288 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$9t], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:52:57,558 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Xq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:52:58,309 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$+t], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:52:58,578 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Yq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:52:59,139 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-40 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
	at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
	at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
	at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
	... 5 more
2025-08-16T04:52:59,328 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$~t], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:52:59,598 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Zq], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:53:00,349 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$au], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:53:00,617 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$0q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:01,369 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$bu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:01,638 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$1q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:02,388 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$cu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:02,658 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$2q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:53:03,409 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$du], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:03,678 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$3q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:04,429 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$eu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:04,698 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$4q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:05,448 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$fu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:53:05,718 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$5q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:06,469 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$gu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:06,738 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$6q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:07,488 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$hu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:07,758 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$7q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:53:08,509 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$iu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:08,778 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$8q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:09,528 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ju], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:09,798 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$9q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:10,549 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ku], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:53:10,818 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$+q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:11,569 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$lu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:11,838 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$~q], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:12,588 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$mu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:12,858 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ar], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:53:13,608 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$nu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:13,878 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$br], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:14,629 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ou], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:14,898 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$cr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:15,648 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$pu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:53:15,918 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$dr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:16,669 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$qu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:16,938 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$er], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:17,688 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ru], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:17,958 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$fr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:53:18,708 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$su], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:53:18,978 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$gr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:53:19,728 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$tu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:53:19,998 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$hr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:53:20,179 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-40 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
	at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
	at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
	at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
	... 5 more
2025-08-16T04:53:20,749 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$uu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:53:21,018 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ir], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:53:21,768 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$vu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:53:22,038 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$jr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:53:22,788 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$wu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:23,059 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$kr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:23,809 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$xu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:24,079 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$lr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:24,828 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$yu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:53:25,098 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$mr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:25,849 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$zu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:26,118 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$nr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:26,869 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Au], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:27,138 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$or], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:53:27,890 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Bu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:28,158 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$pr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:28,909 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Cu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:29,177 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$qr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:29,928 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Du], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:53:30,198 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$rr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:30,948 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Eu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:31,218 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$sr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:31,969 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Fu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:32,238 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$tr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:53:32,989 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Gu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:33,258 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ur], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:34,008 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Hu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:34,278 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$vr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:35,032 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Iu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:53:35,298 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$wr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:36,049 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ju], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:36,317 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$xr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:37,069 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ku], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:37,339 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$yr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:53:38,089 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Lu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:38,357 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$zr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:39,108 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Mu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:39,378 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ar], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:40,128 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Nu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:53:40,398 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Br], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:53:41,148 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ou], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:53:41,219 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-08-16T04:53:41,418 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Cr], minVersion=POTASSIUM, maxVersion=POTASSIUM}.
isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:42,169 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Pu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:42,438 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Dr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:43,188 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Qu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:43,458 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Er], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:44,208 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ru], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:53:44,478 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Fr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:45,229 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Su], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:45,498 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Gr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:46,249 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Tu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:46,518 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Hr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:53:47,268 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Uu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:47,538 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ir], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:48,288 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Vu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:48,558 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Jr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:49,308 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Wu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:53:49,578 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Kr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:50,329 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Xu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:50,598 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Lr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:51,349 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Yu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:51,618 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Mr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:53:52,368 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Zu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:52,638 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Nr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:53,389 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$0u], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:53,658 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Or], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:54,409 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$1u], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:53:54,678 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Pr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:55,428 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$2u], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:55,698 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Qr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:56,448 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$3u], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:56,718 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Rr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:53:57,469 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$4u], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:57,738 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Sr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:58,489 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$5u], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:58,758 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Tr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:53:59,509 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$6u], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:53:59,778 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ur], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:54:00,528 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$7u], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:54:00,798 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Vr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:54:01,549 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$8u], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:54:01,817 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Wr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:54:02,260 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-41 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-08-16T04:54:02,568 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$9u], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:54:02,838 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Xr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:54:03,588 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$+u], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:54:03,858 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Yr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:04,608 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$~u], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:04,878 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Zr], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:05,628 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$av], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:05,898 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$0r], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:54:06,649 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$bv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:06,919 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$1r], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:07,669 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$cv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:07,939 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$2r], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:08,688 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$dv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:54:08,978 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$3r], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:09,709 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ev], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:09,998 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$4r], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:10,729 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$fv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:11,019 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$5r], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:54:11,748 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$gv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:12,038 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$6r], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:12,768 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$hv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:13,058 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$7r], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:13,789 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$iv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:54:14,077 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$8r], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:14,808 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$jv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:15,099 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$9r], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:15,828 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$kv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:16,118 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$+r], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:54:16,848 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$lv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:17,139 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$~r], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:17,868 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$mv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:18,158 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$as], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:18,888 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$nv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:54:19,178 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$bs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:19,909 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ov], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:20,198 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$cs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:20,928 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$pv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:21,219 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ds], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:54:21,948 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$qv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:54:22,238 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$es], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:54:22,969 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$rv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:54:23,259 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$fs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:54:23,299 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-08-16T04:54:23,989 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$sv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:54:24,278 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$gs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:54:25,008 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$tv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:54:25,298 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$hs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:54:26,028 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$uv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:26,318 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$is], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:27,048 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$vv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:27,338 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$js], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:28,068 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$wv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:54:28,358 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ks], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:29,088 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$xv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:29,378 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ls], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:30,109 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$yv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:30,398 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ms], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:54:31,129 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$zv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:31,418 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ns], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:32,149 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Av], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:32,437 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$os], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:33,169 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Bv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:54:33,458 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ps], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:34,188 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Cv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:34,478 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$qs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:35,208 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Dv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:35,499 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$rs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:54:36,228 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ev], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:36,518 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ss], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:37,248 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Fv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:37,539 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ts], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:38,268 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Gv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:54:38,559 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$us], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:39,288 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Hv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:39,577 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$vs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:40,309 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Iv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:40,598 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ws], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:54:41,328 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Jv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:41,618 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$xs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:42,348 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Kv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:42,638 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ys], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:43,368 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Lv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:54:43,658 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$zs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:54:44,339 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-08-16T04:54:44,388 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Mv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:54:44,678 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$As], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:45,409 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Nv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:45,698 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Bs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:46,428 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ov], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:46,718 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Cs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:54:47,448 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Pv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:47,739 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ds], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:48,469 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Qv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:48,758 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Es], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:49,488 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Rv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:54:49,778 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Fs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:50,509 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Sv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:50,798 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Gs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:51,528 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Tv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:51,818 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Hs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:54:52,549 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Uv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:52,838 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Is], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:53,568 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Vv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:53,858 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Js], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:54,588 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Wv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:54:54,878 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ks], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:55,609 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Xv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:55,898 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ls], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:56,628 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Yv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:56,918 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ms], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:54:57,648 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Zv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:57,938 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ns], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:58,669 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$0v], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:58,958 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Os], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:54:59,354 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Current Term Verification 3Node Cluster.Verify No Flows In Cluster After Flow Deletion" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Current Term Verification 3Node Cluster.Verify No Flows In Cluster After Flow Deletion 2025-08-16T04:54:59,688 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$1v], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:54:59,977 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ps], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:00,709 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$2v], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:00,998 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Qs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:01,728 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$3v], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:02,018 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Rs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:55:02,749 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$4v], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:03,038 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ss], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:03,768 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$5v], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:04,058 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ts], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:04,788 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$6v], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:55:05,078 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Us], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:55:05,379 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-41 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-08-16T04:55:05,809 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$7v], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:55:06,098 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Vs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:06,829 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$8v], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:07,118 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ws], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:07,849 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$9v], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:08,137 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Xs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:55:08,868 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$+v], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:09,158 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ys], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:09,888 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$~v], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:10,178 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Zs], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:10,908 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$aw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:55:11,197 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$0s], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:11,929 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$bw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:12,218 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$1s], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:12,948 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$cw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:13,238 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$2s], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:55:13,968 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$dw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:14,258 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$3s], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:14,989 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ew], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:15,278 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$4s], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:16,009 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$fw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:55:16,298 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$5s], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:17,028 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$gw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:17,318 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$6s], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:18,048 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$hw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:18,338 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$7s], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:55:19,069 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$iw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:19,358 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$8s], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:20,089 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$jw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:20,379 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$9s], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:21,109 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$kw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:55:21,398 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$+s], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:22,128 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$lw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:22,417 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$~s], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:23,148 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$mw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:23,438 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$at], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:55:24,168 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$nw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:55:24,458 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$bt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:55:25,188 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ow], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:55:25,479 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ct], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:55:26,209 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$pw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:55:26,420 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-31 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-08-16T04:55:26,498 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$dt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:55:27,229 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$qw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:55:27,518 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$et], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:55:28,249 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$rw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:28,538 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ft], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:29,270 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$sw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:29,558 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$gt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:30,288 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$tw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:55:30,578 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ht], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:31,308 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$uw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:31,599 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$it], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:32,329 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$vw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:32,618 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$jt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:55:33,349 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ww], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:33,638 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$kt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:34,368 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$xw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:34,658 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$lt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:35,388 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$yw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:55:35,678 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$mt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:36,408 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$zw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:36,698 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$nt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:37,428 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Aw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:37,717 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ot], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:55:38,448 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Bw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:38,738 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$pt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:39,468 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Cw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:39,758 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$qt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:40,488 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Dw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:55:40,778 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$rt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:41,508 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ew], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:41,801 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$st], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:42,529 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Fw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:42,818 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$tt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:55:43,548 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Gw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:43,838 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ut], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:44,568 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Hw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:44,858 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$vt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:45,589 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Iw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:55:45,878 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$wt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:55:46,609 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Jw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:55:46,898 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$xt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:55:47,459 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-08-16T04:55:47,628 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Kw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:55:47,918 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$yt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:55:48,648 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Lw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:55:48,938 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$zt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:55:49,668 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Mw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:55:49,957 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$At], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:50,687 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Nw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:50,978 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Bt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:51,708 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ow], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:51,998 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ct], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:55:52,728 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Pw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:53,018 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Dt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:53,748 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Qw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:54,038 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Et], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:54,768 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Rw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:55:55,058 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ft], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:55,789 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Sw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:56,078 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Gt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:56,809 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Tw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:57,098 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ht], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:55:57,829 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Uw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:58,118 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$It], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:58,849 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Vw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:59,138 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Jt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:55:59,869 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ww], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:56:00,159 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Kt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:00,888 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Xw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:01,178 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Lt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:01,908 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Yw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:02,198 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Mt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:56:02,928 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Zw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:03,218 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Nt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:03,948 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$0w], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:04,238 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ot], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:04,968 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$1w], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:56:05,258 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Pt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:05,988 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$2w], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:06,278 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Qt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:07,008 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$3w], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:07,299 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Rt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:56:08,029 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$4w], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:56:08,318 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$St], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:56:08,499 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-39 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-08-16T04:56:09,048 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$5w], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:56:09,339 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Tt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:56:10,069 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$6w], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:56:10,358 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ut], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:56:11,087 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$7w], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:56:11,378 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Vt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:56:12,108 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$8w], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:12,398 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Wt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:13,127 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$9w], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:13,418 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Xt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:14,148 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$+w], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:56:14,438 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Yt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:15,168 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$~w], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:15,458 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Zt], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:16,189 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ax], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:16,478 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$0t], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:56:17,208 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$bx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:17,498 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$1t], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:18,228 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$cx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:18,518 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$2t], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:19,248 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$dx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:56:19,539 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$3t], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:20,271 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ex], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:20,558 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$4t], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:21,288 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$fx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:21,578 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$5t], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:56:22,307 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$gx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:22,598 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$6t], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:23,328 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$hx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:23,618 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$7t], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:24,348 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ix], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:56:24,638 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$8t], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:25,368 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$jx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:25,659 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$9t], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:26,388 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$kx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:26,679 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$+t], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:56:27,408 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$lx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:27,699 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$~t], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:28,428 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$mx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:28,718 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$au], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:29,449 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$nx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:29,538 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-40 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] 
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
	at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
	at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
	at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more 2025-08-16T04:56:29,738 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$bu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:30,468 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ox], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:30,757 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$cu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:56:31,489 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$px], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:31,778 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$du], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:32,507 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$qx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:32,799 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$eu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:33,528 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$rx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:56:33,819 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$fu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:34,548 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$sx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:34,838 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$gu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:35,568 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$tx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:35,858 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$hu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:56:36,588 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ux], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:36,878 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$iu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:37,608 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$vx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:37,899 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ju], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:38,628 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$wx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:56:38,918 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ku], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:39,648 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$xx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:39,938 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$lu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:40,668 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$yx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:40,958 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$mu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:56:41,689 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$zx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:41,979 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$nu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:42,708 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ax], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:42,998 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ou], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:43,728 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Bx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:56:44,018 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$pu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:44,748 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Cx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:45,039 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$qu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:45,768 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Dx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:46,058 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ru], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:56:46,788 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ex], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:47,079 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$su], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:47,807 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Fx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:48,098 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$tu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:48,828 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Gx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:56:49,118 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$uu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:49,847 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Hx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:50,139 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$vu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:50,579 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-48 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?] at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?] at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?] 
	at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
	at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more 2025-08-16T04:56:50,868 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ix], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:51,158 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$wu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:51,888 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Jx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:52,178 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$xu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:52,908 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Kx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:56:53,198 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$yu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:53,928 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Lx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:54,218 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$zu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:54,948 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Mx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:55,238 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Au], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:56:55,967 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Nx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:56,258 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Bu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:56,988 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ox], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:57,278 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Cu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:58,008 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Px], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:56:58,298 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Du], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:59,027 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Qx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:56:59,319 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Eu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:00,048 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Rx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:00,338 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Fu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:57:01,068 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Sx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:01,358 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Gu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:02,088 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Tx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:02,378 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Hu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:03,107 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ux], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:57:03,399 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Iu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:04,128 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Vx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:04,418 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ju], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:05,148 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Wx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:05,438 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ku], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:57:06,169 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Xx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:06,458 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Lu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:07,187 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Yx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:07,478 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Mu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:08,208 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Zx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:57:08,498 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Nu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:09,228 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$0x], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:09,518 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ou], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:10,247 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$1x], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:10,538 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Pu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:57:11,268 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$2x], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:57:11,559 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Qu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:57:11,619 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-08-16T04:57:12,288 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$3x], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:57:12,579 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ru], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:57:13,308 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$4x], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:57:13,599 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Su], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:57:14,329 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$5x], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:57:14,618 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Tu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:57:15,348 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$6x], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:15,638 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Uu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:16,368 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$7x], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:16,659 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Vu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:17,388 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$8x], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:57:17,678 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Wu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:18,408 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$9x], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:18,698 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Xu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:19,428 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$+x], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:19,718 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Yu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:57:20,448 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$~x], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:20,737 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Zu], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:21,468 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ay], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:21,758 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$0u], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:22,488 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$by], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:57:22,778 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$1u], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:23,508 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$cy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:23,798 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$2u], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:24,529 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$dy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:24,818 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$3u], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:57:25,548 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ey], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:25,838 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$4u], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:26,567 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$fy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:26,858 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$5u], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:27,587 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$gy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:57:27,878 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$6u], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:28,608 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$hy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:28,897 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$7u], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:29,628 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$iy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:29,918 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$8u], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:57:30,648 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$jy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:57:30,939 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$9u], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:57:31,668 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ky], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:57:31,958 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$+u], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:57:32,659 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-31 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-08-16T04:57:32,688 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ly], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:57:32,978 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$~u], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:57:33,708 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$my], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:57:33,998 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$av], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:57:34,728 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ny], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:35,018 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$bv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:35,749 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$oy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:36,038 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$cv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:36,768 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$py], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:57:37,058 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$dv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:37,788 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$qy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:38,078 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ev], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:38,808 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ry], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:39,098 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$fv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:57:39,827 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$sy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:40,119 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$gv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:40,847 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ty], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:41,138 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$hv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:41,868 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$uy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:57:42,158 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$iv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:42,888 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$vy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:43,179 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$jv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:43,908 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$wy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:44,198 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$kv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:57:44,929 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$xy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:45,218 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$lv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:45,947 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$yy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:46,239 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$mv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:46,968 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$zy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:57:47,259 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$nv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:47,988 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ay], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:48,278 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ov], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:49,008 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$By], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:49,298 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$pv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:57:50,028 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Cy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:50,318 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$qv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:51,048 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Dy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:51,338 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$rv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:52,068 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ey], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:57:52,358 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$sv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:57:53,087 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Fy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:57:53,378 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$tv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:57:53,700 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-5 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
	at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
	at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
	at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
	... 5 more
2025-08-16T04:57:54,107 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Gy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:57:54,398 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$uv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:57:55,128 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Hy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:57:55,418 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$vv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:57:56,148 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Iy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:57:56,438 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$wv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:57,168 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Jy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:57,458 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$xv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:58,187 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ky], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:58,479 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$yv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:57:59,208 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ly], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:57:59,499 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$zv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:00,227 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$My], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:00,518 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Av], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:01,247 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ny], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:58:01,538 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Bv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:02,267 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Oy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:02,559 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Cv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:03,287 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Py], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:03,578 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Dv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:04,261 | INFO | sshd-SshServer[7be65bf8](port=8101)-timer-thread-1 | ServerSessionImpl | 125 - org.apache.sshd.osgi - 2.14.0 | Disconnecting(ServerSessionImpl[karaf@/10.30.171.78:42850]): SSH2_DISCONNECT_PROTOCOL_ERROR - Detected IdleTimeout after 1800075/1800000 ms. 
2025-08-16T04:58:04,307 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Qy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:04,597 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ev], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:05,327 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ry], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:05,618 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Fv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:06,348 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Sy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:58:06,638 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Gv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:07,367 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ty], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:07,658 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Hv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:08,388 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Uy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:08,679 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Iv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:58:09,407 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Vy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:09,698 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Jv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:10,427 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Wy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:10,718 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Kv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:11,448 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Xy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:58:11,739 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Lv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:12,468 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Yy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:12,758 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Mv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:13,487 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Zy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:13,779 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Nv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:58:14,508 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$0y], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:58:14,739 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-40 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
	at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
	at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
	at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
	... 5 more
2025-08-16T04:58:14,798 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ov], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:58:15,528 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$1y], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:15,819 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Pv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:16,549 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$2y], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:16,838 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Qv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:17,568 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$3y], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:58:17,858 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Rv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:18,588 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$4y], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:18,878 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Sv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:19,607 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$5y], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:19,898 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Tv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:58:20,627 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$6y], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:20,918 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Uv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:21,648 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$7y], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:21,938 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Vv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:22,668 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$8y], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:58:22,959 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Wv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:23,687 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$9y], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:23,978 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Xv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:24,709 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$+y], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:24,998 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Yv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:58:25,727 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$~y], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:26,018 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Zv], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:26,748 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$az], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:27,038 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$0v], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:27,767 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$bz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:58:28,059 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$1v], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:28,788 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$cz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:29,078 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$2v], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:29,808 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$dz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:30,099 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$3v], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:58:30,828 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ez], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:31,118 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$4v], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:31,847 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$fz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:32,138 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$5v], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:32,867 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$gz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:58:33,158 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$6v], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:58:33,887 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$hz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:58:34,178 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$7v], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:58:34,907 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$iz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:58:35,198 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$8v], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:58:35,779 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-40 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-08-16T04:58:35,927 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$jz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:58:36,218 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$9v], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:58:36,947 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$kz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:58:37,238 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$+v], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:37,967 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$lz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:38,258 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$~v], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:38,988 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$mz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:39,279 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$aw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:58:40,007 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$nz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:40,298 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$bw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:41,027 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$oz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:41,318 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$cw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:42,048 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$pz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:58:42,338 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$dw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:43,067 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$qz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:43,358 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ew], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:44,088 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$rz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:44,378 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$fw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:58:45,107 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$sz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:45,398 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$gw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:46,127 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$tz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:46,420 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$hw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:47,148 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$uz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:58:47,438 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$iw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:48,170 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$vz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:48,458 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$jw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:49,188 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$wz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:49,478 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$kw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:58:50,207 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$xz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:50,498 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$lw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:51,228 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$yz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:51,518 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$mw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:52,248 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$zz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:58:52,538 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$nw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:53,268 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Az], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:53,558 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ow], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:54,287 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Bz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:54,577 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$pw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:58:55,308 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Cz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:58:55,598 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$qw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:58:56,328 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Dz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:58:56,618 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$rw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:58:56,820 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-41 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-08-16T04:58:57,348 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ez], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:58:57,638 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$sw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:58:58,367 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Fz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:58:58,658 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$tw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:58:59,388 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Gz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:58:59,678 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$uw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:00,407 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Hz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:00,698 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$vw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:01,429 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Iz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:59:01,718 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ww], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:02,447 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Jz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:02,738 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$xw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:03,468 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Kz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:03,758 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$yw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:59:04,487 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Lz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:04,778 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$zw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:05,507 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Mz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:05,798 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Aw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:06,528 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Nz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:59:06,818 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Bw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:07,547 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Oz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:07,839 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Cw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:08,568 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Pz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:08,859 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Dw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:59:09,588 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Qz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:09,878 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ew], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:10,607 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Rz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:10,898 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Fw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:11,628 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Sz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:59:11,918 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Gw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:12,647 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Tz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:12,938 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Hw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:13,667 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Uz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:13,958 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Iw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:59:14,687 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Vz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:14,978 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Jw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:15,707 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Wz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:15,998 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Kw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:16,727 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Xz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:59:17,018 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Lw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:59:17,748 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Yz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:59:17,859 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-08-16T04:59:18,038 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Mw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:59:18,768 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Zz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:59:19,058 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Nw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:59:19,787 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$0z], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:59:20,078 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ow], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:59:20,807 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$1z], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:59:21,098 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Pw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:21,827 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$2z], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:22,118 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Qw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:22,847 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$3z], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:23,137 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Rw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:59:23,868 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$4z], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:24,158 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Sw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:24,887 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$5z], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:25,178 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Tw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:25,907 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$6z], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:59:26,198 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Uw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:26,927 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$7z], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:27,218 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Vw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:27,948 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$8z], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:28,238 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ww], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:59:28,967 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$9z], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:29,258 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Xw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:29,988 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$+z], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:30,279 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Yw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:31,007 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$~z], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:59:31,298 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Zw], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:32,028 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$aA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:32,318 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$0w], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:33,047 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$bA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:33,338 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$1w], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:59:34,068 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$cA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:34,358 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$2w], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:35,088 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$dA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:35,378 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$3w], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:36,108 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$eA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:59:36,398 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$4w], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:59:37,127 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$fA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:59:37,418 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$5w], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:59:38,148 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$gA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:59:38,438 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$6w], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:59:38,899 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-40 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-08-16T04:59:39,167 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$hA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:59:39,458 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$7w], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:59:40,188 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$iA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:59:40,478 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$8w], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:41,207 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$jA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:41,498 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$9w], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:42,228 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$kA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:42,518 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$+w], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:59:43,248 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$lA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:43,538 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$~w], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:44,268 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$mA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:44,558 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ax], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:45,287 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$nA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:59:45,578 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$bx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:46,307 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$oA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:46,598 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$cx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:47,328 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$pA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:47,618 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$dx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:59:48,347 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$qA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:48,638 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ex], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:49,367 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$rA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:49,658 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$fx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:50,388 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$sA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:59:50,678 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$gx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:51,407 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$tA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:51,698 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$hx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:52,428 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$uA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:52,718 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ix], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:59:53,447 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$vA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:53,738 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$jx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:54,467 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$wA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:54,758 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$kx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:55,487 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$xA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:59:55,778 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$lx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:56,507 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$yA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:56,798 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$mx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:57,528 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$zA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T04:59:57,819 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$nx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T04:59:58,547 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$AA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:59:58,839 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ox], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:59:59,567 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$BA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:59:59,858 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$px], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T04:59:59,939 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-14 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
	at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
	at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
	at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
	... 5 more
2025-08-16T05:00:00,587 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$CA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T05:00:00,878 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$qx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T05:00:01,607 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$DA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T05:00:01,898 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$rx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T05:00:02,627 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$EA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:02,918 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$sx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:03,647 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$FA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:03,938 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$tx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:04,667 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$GA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T05:00:04,958 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ux], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:05,687 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$HA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:05,978 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$vx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:06,707 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$IA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:06,999 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$wx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T05:00:07,727 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$JA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:08,018 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$xx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:08,747 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$KA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:09,038 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$yx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:09,768 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$LA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T05:00:10,058 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$zx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:10,788 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$MA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:11,078 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ax], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:11,807 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$NA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:12,098 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Bx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T05:00:12,828 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$OA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:13,118 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Cx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:13,847 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$PA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:14,138 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Dx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:14,867 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$QA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T05:00:15,158 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ex], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:15,887 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$RA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:16,178 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Fx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:16,907 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$SA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:17,198 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Gx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T05:00:17,928 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$TA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:18,218 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Hx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:18,947 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$UA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:19,238 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ix], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:19,967 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$VA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T05:00:20,259 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Jx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T05:00:20,969 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-40 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
	at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
	at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
	at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
	... 5 more
2025-08-16T05:00:20,987 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$WA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T05:00:21,279 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Kx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:22,007 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$XA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:22,298 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Lx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:23,027 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$YA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:23,318 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Mx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T05:00:24,047 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ZA], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:24,338 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Nx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:25,067 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$0A], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:25,358 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ox], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:26,087 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$1A], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T05:00:26,378 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Px], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:27,107 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$2A], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:27,398 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Qx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:28,128 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$3A], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:28,418 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Rx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T05:00:29,148 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$4A], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:29,438 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Sx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:30,167 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$5A], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:30,459 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Tx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:31,187 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$6A], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T05:00:31,479 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ux], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:32,208 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$7A], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:32,498 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Vx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:33,227 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$8A], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:33,518 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Wx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T05:00:34,247 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$9A], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:34,538 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Xx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:35,267 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$+A], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:35,558 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Yx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:36,287 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$~A], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T05:00:36,578 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Zx], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:37,308 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$aB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:37,598 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$0x], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:38,327 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$bB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:38,620 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$1x], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T05:00:39,347 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$cB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:39,638 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$2x], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:40,367 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$dB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:40,658 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$3x], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:41,387 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$eB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T05:00:41,678 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$4x], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:42,009 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?] at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?] at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?] at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?] at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later. at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?] at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?] 
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more 2025-08-16T05:00:42,407 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$fB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T05:00:42,698 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$5x], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:43,426 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$gB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:43,719 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$6x], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:44,447 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$hB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:44,739 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$7x], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T05:00:45,467 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$iB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:45,758 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$8x], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:46,487 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$jB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:46,778 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$9x], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:47,506 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$kB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T05:00:47,798 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$+x], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:48,527 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$lB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:48,819 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$~x], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:49,547 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$mB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:49,839 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ay], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T05:00:50,568 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$nB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:50,858 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$by], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:51,586 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$oB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:51,878 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$cy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:52,607 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$pB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T05:00:52,898 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$dy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:53,627 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$qB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:53,918 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ey], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:54,647 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$rB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:54,939 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$fy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T05:00:55,667 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$sB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:55,958 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$gy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:56,687 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$tB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:56,978 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$hy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:57,707 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$uB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T05:00:57,998 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$iy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:58,727 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$vB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:59,018 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$jy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:00:59,747 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$wB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:00,038 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ky], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T05:01:00,767 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$xB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:01,059 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ly], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:01,787 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$yB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:02,078 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$my], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:02,807 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$zB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:03,039 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-40 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard java.util.concurrent.TimeoutException: Shard has no current leader at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?] 
at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?] at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?] at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?] at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?] at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?] at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?] at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?] at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?] at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?] at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?] at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?] Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later. at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?] at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?] at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?] at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?] at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?] 
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?] at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?] at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?] at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?] at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?] at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?] at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?] ... 5 more 2025-08-16T05:01:03,098 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ny], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:03,827 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$AB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:04,118 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$oy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T05:01:04,847 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$BB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:05,138 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$py], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:05,868 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$CB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:06,158 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$qy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:06,887 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$DB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T05:01:07,178 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ry], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:07,907 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$EB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:08,198 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$sy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:08,927 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$FB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:09,218 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ty], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T05:01:09,947 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$GB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:10,238 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$uy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:10,967 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$HB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:11,259 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$vy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:11,987 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$IB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T05:01:12,278 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$wy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:13,007 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$JB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:13,299 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$xy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:14,027 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$KB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:14,318 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$yy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T05:01:15,047 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$LB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:15,338 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$zy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:16,067 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$MB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:16,358 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ay], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:17,087 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$NB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T05:01:17,378 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$By], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:18,107 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$OB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:18,399 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Cy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:19,127 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$PB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:19,418 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Dy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T05:01:20,148 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$QB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:20,438 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ey], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:21,167 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$RB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:21,458 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Fy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:22,187 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$SB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T05:01:22,479 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Gy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T05:01:23,207 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$TB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T05:01:23,499 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Hy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T05:01:24,079 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
	at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
	at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
	at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
	... 5 more
2025-08-16T05:01:24,237 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$UB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T05:01:24,519 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Iy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T05:01:25,257 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$VB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T05:01:25,538 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Jy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T05:01:26,277 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$WB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T05:01:26,558 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ky], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:27,297 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$XB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:27,578 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ly], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:28,317 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$YB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:28,598 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$My], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T05:01:29,337 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ZB], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:29,618 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ny], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:30,357 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$0B], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:30,638 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Oy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:31,377 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$1B], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T05:01:31,659 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Py], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:32,397 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$2B], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:32,678 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Qy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:33,417 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$3B], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:33,699 | INFO | opendaylight-cluster-data-shard-dispatcher-37 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ry], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T05:01:34,438 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$4B], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:34,718 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Sy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:35,457 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$5B], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:35,738 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Ty], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:36,477 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$6B], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T05:01:36,758 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Uy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:37,498 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$7B], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:37,778 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Vy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:38,517 | INFO | opendaylight-cluster-data-shard-dispatcher-36 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$8B], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:38,798 | INFO | opendaylight-cluster-data-shard-dispatcher-45 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Wy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T05:01:39,537 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$9B], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:39,818 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Xy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:40,098 | INFO | pipe-log:log "ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Current Term Verification 3Node Cluster.PreLeader Verification" | core | 112 - org.apache.karaf.log.core - 4.4.7 | ROBOT MESSAGE: Starting test openflowplugin-clustering-bulkomatic.txt.Cluster Current Term Verification 3Node Cluster.PreLeader Verification 2025-08-16T05:01:40,557 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$+B], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:40,839 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Yy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:41,577 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$~B], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T05:01:41,858 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$Zy], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:42,597 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$aC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:42,878 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$0y], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:43,616 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$bC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:43,898 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$1y], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T05:01:44,637 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$cC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T05:01:44,919 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$2y], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T05:01:45,119 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-13 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
	at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
	at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
	at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
	at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
	at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
	at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
	at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
	at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
	at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
	at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
	at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
	at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
	at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
	at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
	at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
	... 5 more
2025-08-16T05:01:45,657 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$dC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T05:01:45,938 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$3y], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T05:01:46,677 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$eC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T05:01:46,959 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$4y], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T05:01:47,696 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$fC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T05:01:47,978 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$5y], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T05:01:48,717 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$gC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:48,999 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$6y], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:49,736 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$hC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:50,018 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$7y], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:50,758 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$iC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T05:01:51,038 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$8y], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:51,778 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$jC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:52,059 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$9y], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:52,797 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$kC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:53,078 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$+y], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T05:01:53,817 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$lC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:54,098 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$~y], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:54,837 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$mC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:55,118 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$az], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:55,858 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$nC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T05:01:56,138 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$bz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:56,877 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$oC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:57,158 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$cz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:57,936 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$pC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:58,178 | INFO | opendaylight-cluster-data-shard-dispatcher-38 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$dz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T05:01:59,009 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$qC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:01:59,198 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$ez], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:02:00,027 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$rC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:02:00,218 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$fz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:02:01,047 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$sC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T05:02:01,239 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$gz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:02:02,067 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$tC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:02:02,258 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$hz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:02:03,087 | INFO | opendaylight-cluster-data-shard-dispatcher-43 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$uC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 2025-08-16T05:02:03,279 | INFO | opendaylight-cluster-data-shard-dispatcher-44 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$iz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false. 
2025-08-16T05:02:04,106 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$vC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T05:02:04,299 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$jz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T05:02:05,127 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$wC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T05:02:05,318 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$kz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T05:02:06,147 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$xC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T05:02:06,159 | WARN | opendaylight-cluster-data-pekko.actor.default-dispatcher-41 | AbstractShardBackendResolver | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | Failed to resolve shard
java.util.concurrent.TimeoutException: Shard has no current leader
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:167) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:145) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver$1.onComplete(AbstractShardBackendResolver.java:138) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:309) ~[bundleFile:?]
    at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234) ~[bundleFile:?]
    at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231) ~[bundleFile:?]
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:484) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73) ~[bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110) ~[bundleFile:?]
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) [bundleFile:?]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94) [bundleFile:?]
    at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110) [bundleFile:?]
    at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59) [bundleFile:?]
    at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57) [bundleFile:?]
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) [?:?]
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-inventory-config currently has no leader. Try again later.
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.onShardNotInitializedTimeout(ShardManager.java:628) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.handleCommand(ShardManager.java:242) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) ~[bundleFile:?]
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:266) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:88) ~[bundleFile:?]
    at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:27) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:214) ~[bundleFile:?]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:213) ~[bundleFile:?]
    at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:27) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[bundleFile:?]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.org$apache$pekko$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:931) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive(Eventsourced.scala:256) ~[bundleFile:?]
    at org.apache.pekko.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:255) ~[bundleFile:?]
    at org.apache.pekko.persistence.AbstractPersistentActor.aroundReceive(PersistentActor.scala:305) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[bundleFile:?]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) ~[bundleFile:?]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) ~[bundleFile:?]
    ... 5 more
2025-08-16T05:02:06,338 | INFO | opendaylight-cluster-data-shard-dispatcher-42 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$lz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T05:02:07,166 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-1-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.62:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$yC], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.
2025-08-16T05:02:07,358 | INFO | opendaylight-cluster-data-shard-dispatcher-46 | Shard | 196 - org.opendaylight.controller.sal-distributed-datastore - 11.0.0 | member-3-shard-inventory-config: not currently leader, rejecting request ConnectClientRequest{target=ClientIdentifier{frontend=member-2-frontend-datastore-config, generation=2}, sequence=0, replyTo=Actor[pekko://opendaylight-cluster-data@10.30.170.196:2550/temp/_user_shardmanager-config_member-3-shard-inventory-config$mz], minVersion=POTASSIUM, maxVersion=POTASSIUM}. isLeader: true, isLeaderActive: false,isLeadershipTransferInProgress: false.